
Implementing and Evaluating a Laughing Virtual Character

Published: 25 February 2017
    Abstract

    Laughter is a social signal capable of facilitating interaction in groups of people: it communicates interest, helps to improve creativity, and facilitates sociability. This article focuses on (i) endowing virtual characters with computational models of laughter synthesis based on an expressivity-copying paradigm, and (ii) evaluating how the physical co-presence of the laughing character affects the user’s perception of an audio stimulus and the user’s mood. We adopt music as a means to stimulate laughter. Results show that the character’s presence influences the user’s perception of music and mood. Expressivity-copying influences the user’s perception of music but has no significant impact on mood.

    Supplementary Material

    a3-mancini-apndx.pdf (mancini.zip)
    Supplemental movie, appendix, image, and software files for “Implementing and Evaluating a Laughing Virtual Character”




      Published In

      ACM Transactions on Internet Technology  Volume 17, Issue 1
      Special Issue on Affect and Interaction in Agent-based Systems and Social Media and Regular Paper
      February 2017
      213 pages
      ISSN:1533-5399
      EISSN:1557-6051
      DOI:10.1145/3036639
      Editor: Munindar P. Singh
      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      Published: 25 February 2017
      Accepted: 01 September 2016
      Revised: 01 September 2016
      Received: 01 December 2015
      Published in TOIT Volume 17, Issue 1


      Author Tags

      1. HCI
      2. evaluation
      3. laughter
      4. system
      5. virtual character

      Qualifiers

      • Research-article
      • Research
      • Refereed

      Funding Sources

      • European Union's Horizon 2020 research and innovation programme
      • French National Research Agency projects MOCA and IMPRESSIONS
      • European Union's 7th Framework Programme
      • Labex SMART, French state funds managed by the ANR within the Investissements d'Avenir programme


      Cited By

      • (2023) Computational research and the case for taking humor seriously. HUMOR 36, 2, 207-223. DOI: 10.1515/humor-2023-0021. Online publication date: 17-Mar-2023.
      • (2023) Towards investigating gaze and laughter coordination in socially interactive agents. Proceedings of the 11th International Conference on Human-Agent Interaction, 473-475. DOI: 10.1145/3623809.3623968. Online publication date: 4-Dec-2023.
      • (2023) Data-driven Communicative Behaviour Generation: A Survey. ACM Transactions on Human-Robot Interaction 13, 1, 1-39. DOI: 10.1145/3609235. Online publication date: 16-Aug-2023.
      • (2023) A Music-Driven Deep Generative Adversarial Model for Guzheng Playing Animation. IEEE Transactions on Visualization and Computer Graphics 29, 2, 1400-1414. DOI: 10.1109/TVCG.2021.3115902. Online publication date: 1-Feb-2023.
      • (2023) Blurring the boundaries between fiction and reality: The use of virtual models in brand communication [Die Verwischung der Grenzen zwischen Fiktion und Realität: Der Einsatz virtueller Models in der Markenkommunikation]. In Marketing und Innovation in disruptiven Zeiten, 279-301. DOI: 10.1007/978-3-658-38572-9_11. Online publication date: 21-Jan-2023.
      • (2022) Social robots as eating companions. Frontiers in Computer Science 4. DOI: 10.3389/fcomp.2022.909844. Online publication date: 31-Aug-2022.
      • (2021) Bibliography [Bibliographie]. In Imiter pour grandir, 226-241. DOI: 10.3917/dunod.nadel.2021.01.0226. Online publication date: 3-Mar-2021.
      • (2021) Laughter and smiling facial expression modelling for the generation of virtual affective behavior. PLOS ONE 16, 5, e0251057. DOI: 10.1371/journal.pone.0251057. Online publication date: 12-May-2021.
      • (2021) Facial expression generation of 3D avatar based on semantic analysis. Proceedings of the 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN), 89-94. DOI: 10.1109/RO-MAN50785.2021.9515463. Online publication date: 8-Aug-2021.
      • (2020) Room for one more? Introducing Artificial Commensal Companions. Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, 1-8. DOI: 10.1145/3334480.3383027. Online publication date: 25-Apr-2020.
