The Invisible Potential of Facial Electromyography: A Comparison of EMG and Computer Vision when Distinguishing Posed from Spontaneous Smiles

Published: 02 May 2019

Abstract

Positive experiences are a success metric in product and service design. Quantifying smiles is a method of assessing them continuously. Smiles are usually a cue of positive affect, but they can also be fabricated voluntarily. Automatic detection is a promising complement to human perception in terms of identifying the differences between smile types. Computer vision (CV) and facial distal electromyography (EMG) have been proven successful in this task. This is the first study to use a wearable EMG that does not obstruct the face to compare the performance of CV and EMG measurements in the task of distinguishing between posed and spontaneous smiles. The results showed that EMG has the advantage of being able to identify covert behavior not available through vision. Moreover, CV appears to be able to identify visible dynamic features that human judges cannot account for. This sheds light on the role of non-observable behavior in distinguishing affect-related smiles from polite positive affect displays.




Published In

CHI '19: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems
May 2019
9077 pages
ISBN:9781450359702
DOI:10.1145/3290605
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. computer vision
  2. electromyography
  3. facial expression recognition

Qualifiers

  • Research-article

Conference

CHI '19

Acceptance Rates

CHI '19 paper acceptance rate: 703 of 2,958 submissions (24%)
Overall acceptance rate: 6,199 of 26,314 submissions (24%)

Article Metrics

  • Downloads (last 12 months): 49
  • Downloads (last 6 weeks): 11
Reflects downloads up to 11 Feb 2025

Cited By

  • (2024) Establishing the validity and robustness of facial electromyography measures for political science. Politics and the Life Sciences, 1-18. DOI: 10.1017/pls.2023.26. Online publication date: 15-Jan-2024.
  • (2023) Emotion AI at Work: Implications for Workplace Surveillance, Emotional Labor, and Emotional Privacy. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1-20. DOI: 10.1145/3544548.3580950. Online publication date: 19-Apr-2023.
  • (2023) Survey on Emotion Sensing Using Mobile Devices. IEEE Transactions on Affective Computing 14(4), 2678-2696. DOI: 10.1109/TAFFC.2022.3220484. Online publication date: 1-Oct-2023.
  • (2023) Multi-Modal Emotion Recognition for Enhanced Requirements Engineering: A Novel Approach. 2023 IEEE 31st International Requirements Engineering Conference (RE), 299-304. DOI: 10.1109/RE57278.2023.00039. Online publication date: Sep-2023.
  • (2023) InMyFace. Information Fusion 99(C). DOI: 10.1016/j.inffus.2023.101886. Online publication date: 1-Nov-2023.
  • (2022) Facial Micro-Expression Recognition Based on Deep Local-Holistic Network. Applied Sciences 12(9), 4643. DOI: 10.3390/app12094643. Online publication date: 5-May-2022.
  • (2022) Similarities and disparities between visual analysis and high-resolution electromyography of facial expressions. PLOS ONE 17(2), e0262286. DOI: 10.1371/journal.pone.0262286. Online publication date: 22-Feb-2022.
  • (2022) MyoTac: Real-Time Recognition of Tactical Sign Language Based on Lightweight Deep Neural Network. Wireless Communications and Mobile Computing 2022, 1-17. DOI: 10.1155/2022/2774430. Online publication date: 25-Mar-2022.
  • (2022) Troi. Proceedings of the ACM on Human-Computer Interaction 6(MHCI), 1-22. DOI: 10.1145/3546738. Online publication date: 20-Sep-2022.
  • (2022) Imprecise but Fun. Proceedings of the ACM on Human-Computer Interaction 6(MHCI), 1-21. DOI: 10.1145/3546725. Online publication date: 20-Sep-2022.
