Abstract
We extend an existing virtual agent system to generate communicative gestures for different embodiments (i.e., virtual or physical agents). This paper presents our ongoing work on an implementation of this system for the NAO humanoid robot. Given a specification of multimodal behaviors encoded in the Behavior Markup Language (BML), the system synchronizes and realizes the verbal and nonverbal behaviors on the robot.
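As a rough illustration of the kind of input such a system consumes, the following is a minimal BML fragment (element names follow the BML standard; the utterance text, ids, and gesture lexeme are hypothetical, not taken from the paper):

```xml
<bml xmlns="http://www.bml-initiative.org/bml/bml-1.0" id="bml1">
  <!-- Speech to be synthesized by the agent's TTS engine -->
  <speech id="s1">
    <text>And then she <sync id="tm1"/> took off.</text>
  </speech>
  <!-- Gesture whose stroke phase is aligned with the sync point in the speech -->
  <gesture id="g1" lexeme="BEAT" stroke="s1:tm1"/>
</bml>
```

A BML realizer schedules the behaviors so that the gesture's stroke coincides with the `tm1` time marker in the utterance, regardless of whether the embodiment is a virtual character or a physical robot.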
© 2012 Springer-Verlag Berlin Heidelberg
Le, Q.A., Pelachaud, C. (2012). Generating Co-speech Gestures for the Humanoid Robot NAO through BML. In: Efthimiou, E., Kouroupetroglou, G., Fotinea, S.-E. (eds.) Gesture and Sign Language in Human-Computer Interaction and Embodied Communication. GW 2011. Lecture Notes in Computer Science, vol. 7206. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-34182-3_21
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-34181-6
Online ISBN: 978-3-642-34182-3