DOI: 10.1145/2522848.2522865
A gaze-based method for relating group involvement to individual engagement in multimodal multiparty dialogue

Published: 09 December 2013

Abstract

This paper is concerned with modelling individual engagement and group involvement, as well as their relationship, in an eight-party, multimodal corpus. We propose a number of features (presence, entropy, symmetry and maxgaze) that summarise different aspects of eye-gaze patterns and allow us to describe individual as well as group behaviour over time. We use these features to define similarities between the subjects, and we compare this information with the engagement rankings the subjects gave, at the end of each interaction, about themselves and the other participants. We analyse how these features relate to four classes of group involvement, and we build a classifier that distinguishes between those classes with 71% accuracy.
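The abstract names four gaze features but, as a summary, does not spell out how they are computed. The sketch below (in Python) shows one plausible way to derive presence, entropy and maxgaze from per-frame gaze-target labels for a single subject; the frame-based representation, the function name and the exact feature definitions are assumptions made for illustration, not the authors' implementation.

from collections import Counter
import math

def gaze_features(gaze_targets, n_participants):
    """Illustrative gaze features for one subject over a time window.

    gaze_targets: per-frame gaze labels, e.g. participant ids in
    range(n_participants), or None when the subject looks at no one.
    The definitions below are plausible readings of the paper's
    feature names, not its actual implementation.
    """
    tracked = [t for t in gaze_targets if t is not None]
    if not tracked:
        return {"presence": 0.0, "entropy": 0.0, "maxgaze": 0.0}

    # presence: fraction of frames in which the subject gazes at a participant
    presence = len(tracked) / len(gaze_targets)

    # distribution of gaze over the participants
    counts = Counter(tracked)
    probs = [counts[p] / len(tracked) for p in range(n_participants)]

    # entropy: how evenly gaze is spread across participants (in bits)
    entropy = -sum(p * math.log2(p) for p in probs if p > 0)

    # maxgaze: share of gaze concentrated on the most-watched participant
    maxgaze = max(probs)

    return {"presence": presence, "entropy": entropy, "maxgaze": maxgaze}

# Example: a subject mostly watching participant 2 in an eight-party dialogue
frames = [2, 2, 2, 5, None, 2, 1, 2, 2, None]
print(gaze_features(frames, n_participants=8))

A pairwise symmetry feature (for instance, the rate of mutual gaze between two subjects) and a divergence-based similarity between subjects' gaze distributions would follow the same pattern, but both depend on details the abstract leaves unspecified.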




Published In

ICMI '13: Proceedings of the 15th ACM International Conference on Multimodal Interaction
December 2013
630 pages
ISBN: 9781450321297
DOI: 10.1145/2522848

    Publisher

Association for Computing Machinery, New York, NY, United States


    Author Tags

    1. classification
    2. engagement
    3. gaze
    4. interaction
    5. involvement

    Qualifiers

    • Poster

    Conference

    ICMI '13

    Acceptance Rates

ICMI '13 paper acceptance rate: 49 of 133 submissions (37%)
Overall acceptance rate: 453 of 1,080 submissions (42%)

