DOI: 10.5555/2615731.2615856
Research article

Laughter animation synthesis

Published: 05 May 2014

Abstract

Laughter is an important communicative signal in human-human communication. However, very few attempts have been made to model laughter animation synthesis for virtual characters. This paper reports our work on modelling hilarious laughter. We have developed a generator for face and body motions that takes as input a sequence of laughter pseudo-phonemes and the duration of each pseudo-phoneme. Lip and jaw movements are further driven by laughter prosodic features. The proposed generator first learns the relationship between the input signals (pseudo-phonemes and acoustic features) and human motions; the learnt generator can then automatically produce laughter animation in real time. Lip and jaw motion synthesis is based on an extension of Gaussian models, the contextual Gaussian model. Head and eyebrow motion synthesis is based on selecting and concatenating motion segments from motion-capture data of human laughter, while torso and shoulder movements are derived from head motion by a PD controller. Our multimodal laughter behavior generator has been evaluated through a perceptual study involving an interaction in which a human and an agent tell jokes to each other.
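The contextual Gaussian model at the heart of the lip/jaw stage can be sketched concretely. The Python snippet below is a minimal illustration, assuming (as one plausible reading of the abstract, not the authors' published formulation) that the mean of a Gaussian over lip/jaw animation parameters is an affine function of a prosodic context vector such as pitch and energy, with a context-independent covariance; the class name, the least-squares fit, and all dimensions are hypothetical.

```python
import numpy as np

# Hypothetical sketch of a contextual Gaussian model: the mean of the
# distribution over motion parameters is an affine function of a prosodic
# context vector, mu(c) = W @ c + b, with a fixed covariance Sigma.
class ContextualGaussian:
    def __init__(self, dim_motion, dim_context):
        self.W = np.zeros((dim_motion, dim_context))
        self.b = np.zeros(dim_motion)
        self.Sigma = np.eye(dim_motion)

    def fit(self, contexts, motions):
        # Least-squares fit of the affine mean; covariance from the residuals.
        C = np.hstack([contexts, np.ones((len(contexts), 1))])  # bias column
        theta, *_ = np.linalg.lstsq(C, motions, rcond=None)
        self.W, self.b = theta[:-1].T, theta[-1]
        residuals = motions - C @ theta
        self.Sigma = np.cov(residuals, rowvar=False)
        return self

    def mean(self, context):
        # Expected lip/jaw parameters for one frame of prosodic context.
        return self.W @ context + self.b

    def sample(self, context, rng=None):
        rng = rng or np.random.default_rng()
        return rng.multivariate_normal(self.mean(context), self.Sigma)

# Illustrative training data: per-frame (pitch, energy) contexts and
# four lip/jaw animation parameters (all synthetic, for demonstration).
rng = np.random.default_rng(0)
contexts = rng.normal(size=(500, 2))
motions = contexts @ rng.normal(size=(2, 4)) + 0.1 * rng.normal(size=(500, 4))
model = ContextualGaussian(dim_motion=4, dim_context=2).fit(contexts, motions)
frame = model.sample(np.array([0.3, -1.2]))  # one synthesized frame
```

At synthesis time, one such model per pseudo-phoneme could be evaluated on the prosody extracted frame by frame, giving a mean lip/jaw trajectory, with the covariance available for sampling or smoothing.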
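Similarly, the torso and shoulder stage is described as a PD controller driven by head motion. The sketch below shows a standard proportional-derivative law in that role: the torso angle accelerates toward the current head angle and is damped by its own velocity, which yields the lagged, springy follow-through typical of laughter. The gains, frame rate, and reduction to a single pitch angle are assumptions for illustration, not values from the paper.

```python
import numpy as np

def pd_follow(head, dt=1.0 / 25.0, kp=40.0, kd=9.0):
    """Make a torso angle track a head-angle trajectory with a PD law.

    kp and kd are hypothetical gains, not taken from the paper.
    """
    torso = np.zeros_like(head)
    velocity = 0.0
    for t in range(1, len(head)):
        error = head[t - 1] - torso[t - 1]   # how far the torso lags the head
        accel = kp * error - kd * velocity   # proportional-derivative law
        velocity += accel * dt               # integrate acceleration
        torso[t] = torso[t - 1] + velocity * dt
    return torso

# Illustrative use: torso pitch following a 3 Hz head bob during a laugh.
time = np.arange(0.0, 2.0, 1.0 / 25.0)
head_pitch = 0.2 * np.sin(2.0 * np.pi * 3.0 * time)
torso_pitch = pd_follow(head_pitch)   # damped, lagged copy of the head motion
```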



Published In

AAMAS '14: Proceedings of the 2014 international conference on Autonomous agents and multi-agent systems
May 2014, 1774 pages
ISBN: 9781450327381

    Sponsors

    • IFAAMAS

    Publisher

International Foundation for Autonomous Agents and Multiagent Systems, Richland, SC

    Author Tags

    1. expression synthesis
    2. laughter
    3. multimodal animation
    4. virtual agent

    Acceptance Rates

AAMAS '14 paper acceptance rate: 169 of 709 submissions, 24%
Overall acceptance rate: 1,155 of 5,036 submissions, 23%

    Cited By

• (2023) Data-Driven Communicative Behaviour Generation: A Survey. ACM Transactions on Human-Robot Interaction. DOI: 10.1145/3609235. Online publication date: 16-Aug-2023.
• (2021) Multimodal Behavior Modeling for Socially Interactive Agents. The Handbook on Socially Interactive Agents, pages 259-310. DOI: 10.1145/3477322.3477331. Online publication date: 10-Sep-2021.
• (2019) Nonverbal behavior in multimodal performances. The Handbook of Multimodal-Multisensor Interfaces, pages 219-262. DOI: 10.1145/3233795.3233803. Online publication date: 1-Jul-2019.
• (2018) HMM-based generation of laughter facial expression. Speech Communication, 98(C):28-41. DOI: 10.1016/j.specom.2017.12.006. Online publication date: 1-Apr-2018.
• (2017) Implementing and Evaluating a Laughing Virtual Character. ACM Transactions on Internet Technology, 17(1):1-22. DOI: 10.1145/2998571. Online publication date: 25-Feb-2017.
• (2016) Towards a listening agent: a system generating audiovisual laughs and smiles to show interest. Proceedings of the 18th ACM International Conference on Multimodal Interaction, pages 248-255. DOI: 10.1145/2993148.2993182. Online publication date: 31-Oct-2016.
• (2016) Learning Activity Patterns Performed With Emotion. Proceedings of the 3rd International Symposium on Movement and Computing, pages 1-4. DOI: 10.1145/2948910.2948958. Online publication date: 5-Jul-2016.
• (2015) Laughing with a Virtual Agent. Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems, pages 1817-1818. DOI: 10.5555/2772879.2773452. Online publication date: 4-May-2015.
• (2014) Rhythmic Body Movements of Laughter. Proceedings of the 16th International Conference on Multimodal Interaction, pages 299-306. DOI: 10.1145/2663204.2663240. Online publication date: 12-Nov-2014.
