DOI: 10.1145/2663204.2663240

Rhythmic Body Movements of Laughter

Published: 12 November 2014

Abstract

In this paper we focus on three aspects of multimodal expressions of laughter. First, we propose a procedural method to synthesize rhythmic body movements of laughter based on spectral analysis of laughter episodes. For this purpose, we analyze laughter body motions from motion capture data and reconstruct them with appropriate harmonics. We then reduce the parameter space to two dimensions, which serve as the inputs of the model that generates a continuum of rhythmic laughter body movements.
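The pipeline of this first contribution (spectral analysis of a motion-capture channel, reconstruction from a few dominant harmonics, and a low-dimensional control space) can be sketched in a few lines of Python. The sketch below is illustrative only: the frame rate, the number of harmonics, the file name, and the choice of the two control dimensions (a rhythm scale and an intensity gain) are assumptions, not the authors' actual parameterization.

```python
# Minimal sketch: harmonic analysis/resynthesis of one rhythmic laughter
# motion channel. All parameter values and names are illustrative.
import numpy as np

FPS = 120  # assumed motion-capture frame rate

def dominant_harmonics(signal, n_harmonics=3):
    """Return (frequency, amplitude, phase) of the strongest spectral peaks."""
    centered = signal - signal.mean()
    spectrum = np.fft.rfft(centered)
    freqs = np.fft.rfftfreq(len(centered), d=1.0 / FPS)
    # Pick the bins with the largest magnitude.
    idx = np.argsort(np.abs(spectrum))[::-1][:n_harmonics]
    return [(freqs[i],
             2.0 * np.abs(spectrum[i]) / len(centered),
             np.angle(spectrum[i])) for i in idx]

def synthesize(harmonics, duration, rhythm_scale=1.0, intensity=1.0):
    """Resynthesize a joint-angle trace from the harmonics; rhythm_scale
    and intensity stand in for the model's two-dimensional input space."""
    t = np.arange(int(duration * FPS)) / FPS
    out = np.zeros_like(t)
    for f, a, p in harmonics:
        out += intensity * a * np.cos(2.0 * np.pi * rhythm_scale * f * t + p)
    return out

# Usage: analyze one channel of a laughter episode (e.g., torso pitch),
# then generate a 2-second burst that is slightly faster and stronger.
episode = np.load("torso_pitch.npy")  # hypothetical mocap channel
h = dominant_harmonics(episode)
motion = synthesize(h, duration=2.0, rhythm_scale=1.2, intensity=1.4)
```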
In the paper, we also propose a method to integrate the rhythmic body movements generated by our model with other synthesized expressive cues of laughter, such as facial expressions and additional body movements. Finally, we present a real-time human-virtual character interaction scenario in which the virtual character applies our model to respond to a human's laughter in real time.
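To make the interaction scenario concrete, a minimal real-time loop might look as follows, reusing synthesize from the sketch above. Everything here is a hypothetical placeholder (the detector, the agent interface, the polling rate); it only illustrates how the two model inputs could be driven by a detected human laugh and combined with facial and audio cues.

```python
# Minimal sketch of a laughter-mirroring interaction loop. The detector
# and agent interfaces are hypothetical placeholders, not a real API.
import time

def interaction_loop(agent, detector):
    while True:
        laugh = detector.poll()                        # hypothetical laugh detector
        if laugh is not None:
            body = synthesize(agent.harmonics,         # harmonics from offline analysis
                              duration=laugh.duration,
                              rhythm_scale=laugh.rhythm,   # two model inputs,
                              intensity=laugh.intensity)   # driven by the user's laugh
            agent.play(body=body,                      # rhythmic torso/shoulder motion
                       face="laughter",                # synchronized facial expression
                       audio="laughter")               # laughter vocalization
        time.sleep(1.0 / 30.0)                         # ~30 Hz polling, assumed
```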




Published In

ICMI '14: Proceedings of the 16th International Conference on Multimodal Interaction
November 2014
558 pages
ISBN:9781450328852
DOI:10.1145/2663204
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].


Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 12 November 2014


Author Tags

  1. laughter
  2. nonverbal behaviors
  3. realtime interaction
  4. virtual character

Qualifiers

  • Research-article


Conference

ICMI '14

Acceptance Rates

ICMI '14 paper acceptance rate: 51 of 127 submissions (40%)
Overall acceptance rate: 453 of 1,080 submissions (42%)



Article Metrics

  • Downloads (last 12 months): 11
  • Downloads (last 6 weeks): 0
Reflects downloads up to 15 Oct 2024


Cited By

  • (2023) Rhythm Research in Interactive System Design: A Literature Review. International Journal of Human–Computer Interaction, 1-20. DOI: 10.1080/10447318.2023.2294628 (online 27-Dec-2023)
  • (2020) Low-Level Characterization of Expressive Head Motion Through Frequency Domain Analysis. IEEE Transactions on Affective Computing, 11(3), 405-418. DOI: 10.1109/TAFFC.2018.2805892 (online 1-Jul-2020)
  • (2019) Motion Generation during Vocalized Emotional Expressions and Evaluation in Android Robots. Future of Robotics - Becoming Human with Humanoid or Emotional Intelligence [Working Title]. DOI: 10.5772/intechopen.88457 (online 19-Aug-2019)
  • (2019) Analysis and generation of laughter motions, and evaluation in an android robot. APSIPA Transactions on Signal and Information Processing, 8. DOI: 10.1017/ATSIP.2018.32 (online 25-Jan-2019)
  • (2019) The role of respiration audio in multimodal analysis of movement qualities. Journal on Multimodal User Interfaces, 14(1), 1-15. DOI: 10.1007/s12193-019-00302-1 (online 11-Apr-2019)
  • (2019) Music Valence and Genre Influence Group Creativity. Engineering Psychology and Cognitive Ergonomics, 410-422. DOI: 10.1007/978-3-030-22507-0_32 (online 26-Jul-2019)
  • (2018) Laughter Animation Generation. Handbook of Human Motion, 2213-2229. DOI: 10.1007/978-3-319-14418-4_190 (online 5-Apr-2018)
  • (2017) Audio-Driven Laughter Behavior Controller. IEEE Transactions on Affective Computing, 8(4), 546-558. DOI: 10.1109/TAFFC.2017.2754365 (online 1-Oct-2017)
  • (2017) Increased Discriminability of Authenticity from Multimodal Laughter is Driven by Auditory Information. Quarterly Journal of Experimental Psychology, 70(10), 2159-2168. DOI: 10.1080/17470218.2016.1226370 (online 1-Oct-2017)
  • (2017) Laughter Animation Generation. Handbook of Human Motion, 1-16. DOI: 10.1007/978-3-319-30808-1_190-1 (online 19-Apr-2017)
