Detecting Low Rapport During Natural Interactions in Small Groups from Non-Verbal Behaviour

Published: 05 March 2018 · DOI: 10.1145/3172944.3172969

Abstract

Rapport, the close and harmonious relationship in which interaction partners are "in sync" with each other, has been shown to result in smoother social interactions, improved collaboration, and better interpersonal outcomes. In this work, we are the first to investigate the automatic prediction of low rapport during natural interactions within small groups. This task is challenging because rapport manifests only in subtle non-verbal signals that are, in addition, influenced by group dynamics and interpersonal idiosyncrasies. We record videos of unscripted discussions of three to four people using a multi-view camera system and microphones. We analyse a rich set of non-verbal signals for rapport detection, namely facial expressions, hand motion, gaze, speaker turns, and speech prosody. Using facial features, we can detect low rapport with an average precision of 0.7 (chance level: 0.25), while incorporating prior knowledge of participants' personalities even enables early prediction without a drop in performance. We further provide a detailed analysis of different feature sets and of the amount of information contained in different temporal segments of the interactions.
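
For readers unfamiliar with the metric, the chance level quoted above corresponds to the prevalence of the low-rapport class: a random ranking achieves an average precision roughly equal to the fraction of positive samples. The short Python sketch below illustrates this relationship with scikit-learn; the labels and scores are synthetic placeholders, not the authors' data or model.

    # Minimal sketch, not the authors' pipeline: synthetic labels and scores only
    # illustrate how an average-precision score relates to its chance level.
    import numpy as np
    from sklearn.metrics import average_precision_score

    rng = np.random.default_rng(0)

    # Synthetic samples: 1 = low rapport, with ~25% prevalence to match the
    # reported chance level of 0.25.
    y_true = (rng.random(400) < 0.25).astype(int)

    # Placeholder scores; in the paper these would come from a classifier
    # trained on facial, hand-motion, gaze, turn-taking, and prosodic features.
    y_score = rng.random(400) + 0.8 * y_true

    ap = average_precision_score(y_true, y_score)
    chance = y_true.mean()  # AP of a random ranking is roughly the positive-class prevalence

    print(f"average precision: {ap:.2f} (chance level ~ {chance:.2f})")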


                          Published In

                          IUI '18: Proceedings of the 23rd International Conference on Intelligent User Interfaces
                          March 2018
                          698 pages
                          ISBN:9781450349451
                          DOI:10.1145/3172944
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

                          Publisher

                          Association for Computing Machinery

                          New York, NY, United States

                          Publication History

                          Published: 05 March 2018


                          Author Tags

                          1. affective computing
                          2. body posture
                          3. dominance
                          4. facial expressions
                          5. leadership
                          6. personality traits
                          7. social signal processing
                          8. speech prosody

                          Qualifiers

                          • Research-article

                          Funding Sources

                          • JST CREST

                          Conference

                          IUI'18

                          Acceptance Rates

IUI '18 Paper Acceptance Rate: 43 of 299 submissions, 14%
Overall Acceptance Rate: 746 of 2,811 submissions, 27%

                          Article Metrics

• Downloads (last 12 months): 159
• Downloads (last 6 weeks): 11
                          Reflects downloads up to 23 Dec 2024

Cited By
• (2024) Detecting Leadership Opportunities in Group Discussions Using Off-the-Shelf VR Headsets. Sensors 24(8), 2534. DOI: 10.3390/s24082534. Online publication date: 15-Apr-2024.
• (2024) MultiMediate'24: Multi-Domain Engagement Estimation. Proceedings of the 32nd ACM International Conference on Multimedia, 11377-11382. DOI: 10.1145/3664647.3689004. Online publication date: 28-Oct-2024.
• (2024) DAT: Dialogue-Aware Transformer with Modality-Group Fusion for Human Engagement Estimation. Proceedings of the 32nd ACM International Conference on Multimedia, 11397-11403. DOI: 10.1145/3664647.3688988. Online publication date: 28-Oct-2024.
• (2024) Less is More: Adaptive Feature Selection and Fusion for Eye Contact Detection. Proceedings of the 32nd ACM International Conference on Multimedia, 11390-11396. DOI: 10.1145/3664647.3688987. Online publication date: 28-Oct-2024.
• (2024) Towards Engagement Prediction: A Cross-Modality Dual-Pipeline Approach using Visual and Audio Features. Proceedings of the 32nd ACM International Conference on Multimedia, 11383-11389. DOI: 10.1145/3664647.3688986. Online publication date: 28-Oct-2024.
• (2024) Interactions for Socially Shared Regulation in Collaborative Learning: An Interdisciplinary Multimodal Dataset. ACM Transactions on Interactive Intelligent Systems 14(3), 1-34. DOI: 10.1145/3658376. Online publication date: 22-Apr-2024.
• (2024) The CoExplorer Technology Probe: A Generative AI-Powered Adaptive Interface to Support Intentionality in Planning and Running Video Meetings. Proceedings of the 2024 ACM Designing Interactive Systems Conference, 1638-1657. DOI: 10.1145/3643834.3661507. Online publication date: 1-Jul-2024.
• (2024) Exploring Multimodal Nonverbal Functional Features for Predicting the Subjective Impressions of Interlocutors. IEEE Access 12, 96769-96782. DOI: 10.1109/ACCESS.2024.3426537. Online publication date: 2024.
• (2024) FT Xtraction: Feature extraction and visualization of conversational video data for social and emotional analysis. SoftwareX 27, 101827. DOI: 10.1016/j.softx.2024.101827. Online publication date: Sep-2024.
• (2024) Improving collaborative problem-solving skills via automated feedback and scaffolding: a quasi-experimental study with CPSCoach 2.0. User Modeling and User-Adapted Interaction 34(4), 1087-1125. DOI: 10.1007/s11257-023-09387-6. Online publication date: 14-Feb-2024.
