
Personalized Emotion Recognition by Personality-Aware High-Order Learning of Physiological Signals

Published: 24 January 2019

Abstract

Because different subjects respond subjectively to the same physical stimuli, emotion recognition from physiological signals is increasingly personalized. Existing works have mainly focused on modeling the physiological signals of each individual subject, without considering psychological factors such as interest and personality; the latent correlation among different subjects has also rarely been examined. In this article, we investigate the influence of personality on emotional behavior in a hypergraph learning framework. Taking each vertex to be a compound tuple (subject, stimulus), multi-modal hypergraphs can be constructed based on the personality correlation among different subjects and on the physiological correlation among the corresponding stimuli. To capture the differing importance of vertices, hyperedges, and modalities, we learn a weight for each of them, so the constructed hypergraphs are vertex-weighted, multi-modal, and multi-task. Because the hypergraphs connect different subjects through the compound vertices, the emotions of multiple subjects can be recognized simultaneously. The estimated factors, referred to as emotion relevance, are employed for emotion recognition. Extensive experiments on the ASCERTAIN dataset demonstrate the superiority of the proposed method over state-of-the-art emotion recognition approaches.
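
The machinery the abstract builds on is transductive hypergraph learning in the style of Zhou et al. (NIPS 2006). The sketch below is a minimal illustration of that generic machinery, not the paper's actual vertex-weighted multi-modal multi-task formulation: the k-NN hyperedge construction, the fixed 0.5/0.5 modality fusion, and all names and feature dimensions are our own assumptions.

    import numpy as np

    def knn_incidence(features, k=3):
        """One hyperedge per vertex: the vertex plus its k nearest
        neighbours in the given feature space (rows: vertices, cols: hyperedges)."""
        n = features.shape[0]
        dist = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
        H = np.zeros((n, n))
        for e in range(n):
            H[np.argsort(dist[e])[:k + 1], e] = 1.0   # includes vertex e itself
        return H

    def hypergraph_scores(H, y, lam=1.0, w=None):
        """Relevance scores f minimizing f^T (I - Theta) f + lam * ||f - y||^2,
        where Theta is the normalized hypergraph adjacency of Zhou et al. (2006).
        y holds +1/-1 on labelled vertices and 0 on unlabelled ones."""
        n, m = H.shape
        w = np.ones(m) if w is None else w            # hyperedge weights
        Dv = H @ w                                    # vertex degrees
        De = H.sum(axis=0)                            # hyperedge degrees
        S = np.diag(Dv ** -0.5)
        Theta = S @ H @ np.diag(w / De) @ H.T @ S
        # Closed form: f = lam * ((1 + lam) I - Theta)^{-1} y
        return np.linalg.solve((1.0 + lam) * np.eye(n) - Theta, lam * y)

    # Compound (subject, stimulus) vertices; one hypergraph per modality,
    # fused here with fixed weights (the paper learns such weights instead).
    rng = np.random.default_rng(0)
    personality = rng.normal(size=(20, 5))    # e.g. Big-Five traits per vertex
    physiology = rng.normal(size=(20, 32))    # e.g. ECG/GSR features per vertex
    y = np.zeros(20)
    y[:3], y[3:6] = 1.0, -1.0                 # a few labelled vertices
    f = 0.5 * hypergraph_scores(knn_incidence(personality), y) \
      + 0.5 * hypergraph_scores(knn_incidence(physiology), y)
    labels = np.sign(f)                       # predicted high/low emotion

The paper goes further by learning the vertex, hyperedge, and modality weights jointly with the relevance scores f; here they are fixed for brevity.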


Published In

ACM Transactions on Multimedia Computing, Communications, and Applications, Volume 15, Issue 1s
Special Section on Deep Learning for Intelligent Multimedia Analytics and Special Section on Multi-Modal Understanding of Social, Affective and Subjective Attributes of Data
January 2019
265 pages
ISSN: 1551-6857
EISSN: 1551-6865
DOI: 10.1145/3309769
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 24 January 2019
Accepted: 01 June 2018
Revised: 01 April 2018
Received: 01 October 2017
Published in TOMM Volume 15, Issue 1s


Author Tags

  1. Personalized emotion recognition
  2. hypergraph learning
  3. multi-modal fusion
  4. personality-sensitive learning
  5. physiological signal analysis

Qualifiers

  • Research-article
  • Research
  • Refereed

Funding Sources

  • National Key R&D Program of China
  • Berkeley Deep Drive
  • National Natural Science Foundation of China
  • Royal Society Newton Mobility Grant
  • China Postdoctoral Science Foundation

Article Metrics

  • Downloads (last 12 months): 76
  • Downloads (last 6 weeks): 5
Reflects downloads up to 13 Jan 2025


Cited By

  • (2025) Affective Video Content Analysis: Decade Review and New Perspectives. Big Data Mining and Analytics 8, 1 (Feb 2025), 118-144. DOI: 10.26599/BDMA.2024.9020048
  • (2025) Modeling High-Order Relationships Between Human and Video for Emotion Recognition in Video Learning. MultiMedia Modeling (Jan 2025), 3-16. DOI: 10.1007/978-981-96-2064-7_1
  • (2024) EEG-Based Multimodal Emotion Recognition: A Machine Learning Perspective. IEEE Transactions on Instrumentation and Measurement 73 (2024), 1-29. DOI: 10.1109/TIM.2024.3369130
  • (2024) Personalized Multimodal Emotion Recognition: Integrating Temporal Dynamics and Individual Traits for Enhanced Performance. 2024 IEEE 14th International Symposium on Chinese Spoken Language Processing (ISCSLP) (Nov 2024), 408-412. DOI: 10.1109/ISCSLP63861.2024.10800502
  • (2024) Cross-modal credibility modelling for EEG-based multimodal emotion recognition. Journal of Neural Engineering (Apr 2024). DOI: 10.1088/1741-2552/ad3987
  • (2024) Emotion Prediction in Real-Life Scenarios: On the Way to the BIRAFFE3 Dataset. Artificial Intelligence for Neuroscience and Emotional Systems (May 2024), 465-475. DOI: 10.1007/978-3-031-61140-7_44
  • (2024) Multimodal emotion recognition: A comprehensive review, trends, and challenges. WIREs Data Mining and Knowledge Discovery 14, 6 (Oct 2024). DOI: 10.1002/widm.1563
  • (2023) Integrating audio and visual modalities for multimodal personality trait recognition via hybrid deep learning. Frontiers in Neuroscience 16 (Jan 2023). DOI: 10.3389/fnins.2022.1107284
  • (2023) GJFusion: A Channel-Level Correlation Construction Method for Multimodal Physiological Signal Fusion. ACM Transactions on Multimedia Computing, Communications, and Applications 20, 2 (Oct 2023), 1-23. DOI: 10.1145/3617503
  • (2023) Weakly-Supervised Learning for Fine-Grained Emotion Recognition Using Physiological Signals. IEEE Transactions on Affective Computing 14, 3 (Jul 2023), 2304-2322. DOI: 10.1109/TAFFC.2022.3158234
