DOI: 10.1145/3304109.3325816
MMSys Conference Proceedings · Short Paper · Public Access

The Unobtrusive Group Interaction (UGI) Corpus

Published: 18 June 2019

Abstract

Studying group dynamics requires fine-grained spatial and temporal understanding of human behavior. Social psychologists studying interaction patterns in face-to-face group meetings often struggle with huge volumes of data that require many hours of tedious manual coding. Only a few publicly available multi-modal datasets of face-to-face group meetings enable the development of automated methods to study verbal and non-verbal human behavior. In this paper, we present a new, publicly available multi-modal dataset for studying group dynamics that differs from previous datasets in its use of ceiling-mounted, unobtrusive depth sensors. These sensors enable fine-grained analysis of head and body pose and gestures without raising privacy concerns or inhibiting participants' behavior. The dataset is complemented by synchronized, time-stamped meeting transcripts that allow analysis of spoken content. It comprises 22 group meetings in which participants perform a standard collaborative group task designed to measure leadership and productivity. Participants' post-task questionnaires, including demographic information, are also provided as part of the dataset. We demonstrate the utility of the dataset by presenting results of multi-modal analyses of perceived leadership, contribution, and performance, using our sensor-fusion algorithms designed to automatically understand audio-visual interactions.
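To make the abstract's description concrete, below is a minimal sketch of how one might align the corpus's time-stamped transcripts with depth-sensor frames and tally per-participant speaking time, a basic verbal-contribution cue of the kind used in the paper's analysis. This is illustrative only: the file name, CSV column layout (start, end, speaker, text), and 30 fps frame rate are assumptions, not the corpus's documented format.

import csv
from collections import defaultdict

def load_transcript(path):
    """Read utterances as (start_sec, end_sec, speaker, text) tuples.
    Assumed CSV columns: start, end, speaker, text (hypothetical layout)."""
    utterances = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            utterances.append(
                (float(row["start"]), float(row["end"]), row["speaker"], row["text"])
            )
    return utterances

def speaking_time(utterances):
    """Total seconds each participant spoke."""
    totals = defaultdict(float)
    for start, end, speaker, _ in utterances:
        totals[speaker] += end - start
    return dict(totals)

def active_speaker_per_frame(utterances, n_frames, fps=30.0):
    """Label each depth frame with whoever is speaking at that instant,
    the kind of audio-visual alignment sensor fusion relies on.
    The 30 fps frame rate is an assumption."""
    labels = []
    for i in range(n_frames):
        t = i / fps
        labels.append(next((s for a, b, s, _ in utterances if a <= t < b), None))
    return labels

if __name__ == "__main__":
    utts = load_transcript("meeting01_transcript.csv")  # hypothetical file name
    print(speaking_time(utts))
    print(active_speaker_per_frame(utts, n_frames=300)[:10])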

Published In

MMSys '19: Proceedings of the 10th ACM Multimedia Systems Conference
June 2019
374 pages
ISBN: 9781450362979
DOI: 10.1145/3304109

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

Publisher

Association for Computing Machinery, New York, NY, United States

Author Tags

  1. computational social psychology
  2. face-to-face group interactions
  3. multimodal dataset
  4. multimodal interaction
  5. multimodal sensing
  6. time-of-flight sensing

Conference

MMSys '19: 10th ACM Multimedia Systems Conference
June 18-21, 2019
Amherst, Massachusetts

Acceptance Rates

MMSys '19 paper acceptance rate: 40 of 82 submissions (49%).
Overall acceptance rate: 176 of 530 submissions (33%).

Article Metrics

  • Downloads (last 12 months): 76
  • Downloads (last 6 weeks): 16
Reflects downloads up to 30 Aug 2024

Cited By

  • (2023) Co-Located Human–Human Interaction Analysis Using Nonverbal Cues: A Survey. ACM Computing Surveys 56(5), 1-41. DOI: 10.1145/3626516. Online publication date: 25-Nov-2023.
  • (2023) An Interaction-process-guided Framework for Small-group Performance Prediction. ACM Transactions on Multimedia Computing, Communications, and Applications 19(2), 1-25. DOI: 10.1145/3558768. Online publication date: 6-Feb-2023.
  • (2023) SoGrIn: a Non-Verbal Dataset of Social Group-Level Interactions. 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), 2632-2637. DOI: 10.1109/RO-MAN57019.2023.10309351. Online publication date: 28-Aug-2023.
  • (2020) Group Performance Prediction with Limited Context. Companion Publication of the 2020 International Conference on Multimodal Interaction, 191-195. DOI: 10.1145/3395035.3425964. Online publication date: 25-Oct-2020.
  • (2020) Predicting Performance Outcome with a Conversational Graph Convolutional Network for Small Group Interactions. ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 8044-8048. DOI: 10.1109/ICASSP40776.2020.9053308. Online publication date: May-2020.
