DOI: 10.1145/2663204.2667576
Authoring Communicative Behaviors for Situated, Embodied Characters

Published: 12 November 2014

Abstract

Embodied conversational agents (ECAs) hold great potential as multimodal interfaces because they can communicate naturally through speech and nonverbal cues. The goal of my research is to enable animators and designers to endow ECAs with interactive behaviors that are controllable, communicatively effective, natural, and aesthetically appealing. I focus in particular on spatially situated, communicative nonverbal behaviors such as gaze and deictic gestures. This goal requires addressing challenges in animation authoring and editing, parametric control, behavior coordination and planning, and retargeting to different embodiment designs. My research aims to provide animators and designers with the techniques and tools needed to (1) author natural, expressive, and controllable gaze and gesture movements that leverage empirical or learned models of human behavior, (2) apply such behaviors to characters with different designs and communicative styles, and (3) plan coordinated behaviors that economically and correctly convey the range of diverse cues required for multimodal, user-machine interaction.
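To make the idea of parametric control concrete, here is a minimal sketch of one common scheme from the gaze literature: distributing a single gaze shift across eyes, head, and torso using alignment weights. All names, defaults, and the weighting formula below are hypothetical illustrations, not the method proposed in this abstract.

```python
from dataclasses import dataclass

@dataclass
class GazeShift:
    """Hypothetical parametric gaze-shift model: split a target
    rotation across eyes, head, and torso by alignment weights in [0, 1].
    Larger weights make the corresponding body part carry more of
    the shift, changing the character's apparent engagement."""
    head_align: float = 0.3   # fraction of the remaining shift carried by the head
    torso_align: float = 0.1  # fraction of the full shift carried by the torso

    def distribute(self, target_angle_deg: float) -> dict:
        # Torso takes its share of the full shift first,
        # the head takes a share of what remains,
        # and the eyes cover the rest so the gaze lands on target.
        torso = self.torso_align * target_angle_deg
        head = self.head_align * (target_angle_deg - torso)
        eyes = target_angle_deg - torso - head
        return {"eyes": eyes, "head": head, "torso": torso}

# A 60-degree shift with strong head alignment: the components
# always sum back to the full target angle.
shift = GazeShift(head_align=0.5, torso_align=0.2)
angles = shift.distribute(60.0)
assert abs(sum(angles.values()) - 60.0) < 1e-9
```

Exposing the alignment weights as authoring parameters is what makes such a model controllable: an animator can tune engagement or style without re-authoring the underlying motion.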


Published In

ICMI '14: Proceedings of the 16th International Conference on Multimodal Interaction
November 2014, 558 pages
ISBN: 9781450328852
DOI: 10.1145/2663204

Publisher

Association for Computing Machinery, New York, NY, United States

Author Tags

  1. animation
  2. behavior coordination
  3. behavior planning
  4. embodied conversational agents
  5. gaze
  6. gestures
  7. multimodal interfaces
  8. nonverbal communication
  9. situated interaction

Qualifiers

  • Extended-abstract

Acceptance Rates

ICMI '14 Paper Acceptance Rate: 51 of 127 submissions, 40%
Overall Acceptance Rate: 453 of 1,080 submissions, 42%
