Abstract
Gesturing behavior varies greatly across situations, individuals, and cultures. These variations make gestures difficult to study and model systematically. Nevertheless, gesture research on real humans and modeling approaches with virtual agents have made significant progress in recent years. In this chapter we discuss the state of research and present results from an extensive empirical study of human iconic gestures in direction-giving dialogues. We describe how machine learning methods can be employed to extract individual speakers' gesturing styles and to generate individualized language and gestures in embodied conversational agents (ECAs). Evaluations show that human observers rate virtual agents as more competent, human-like, and likable when the agents produce a consistent individual gesture style.
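The core idea of extracting a speaker's gesturing style can be illustrated with a minimal sketch: estimate, per speaker, how often each gesture technique co-occurs with features of the referent in annotated corpus data, then reuse the preferred mapping at generation time. This is a deliberately simplified stand-in for the chapter's actual machine-learning approach (Bayesian decision networks); the feature names, technique labels, and data below are invented for illustration.

```python
from collections import Counter, defaultdict

def learn_style(annotations):
    """Estimate a speaker's preferred gesture technique per referent
    feature from co-occurrence counts in annotated data. A toy
    stand-in for the chapter's learned Bayesian decision networks;
    features and labels here are hypothetical."""
    counts = defaultdict(Counter)
    for feature, technique in annotations:
        counts[feature][technique] += 1
    # For each referent feature, keep the most frequent technique.
    return {f: c.most_common(1)[0][0] for f, c in counts.items()}

# Hypothetical annotations for one speaker: (referent feature, technique).
speaker_a = [
    ("round", "drawing"), ("round", "drawing"), ("round", "shaping"),
    ("flat", "placing"), ("flat", "placing"),
]
style_a = learn_style(speaker_a)
print(style_a["round"])  # this speaker's dominant technique for round referents
```

Training one such model per speaker, rather than pooling all speakers into an average, is what allows a virtual agent to reproduce a consistent individual style.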
© 2012 Springer-Verlag Berlin Heidelberg
Cite this chapter
Kopp, S., Bergmann, K. (2012). Individualized Gesture Production in Embodied Conversational Agents. In: Zacarias, M., de Oliveira, J.V. (eds) Human-Computer Interaction: The Agency Perspective. Studies in Computational Intelligence, vol 396. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-25691-2_12
DOI: https://doi.org/10.1007/978-3-642-25691-2_12
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-25690-5
Online ISBN: 978-3-642-25691-2
eBook Packages: Engineering (R0)