Abstract
Recent studies on non-verbal communication have highlighted the need to give virtual agents a lifelike appearance. In this paper, we present a system for embodying a conversational agent by modeling one of its perceptual behaviors: gazing at a user. Our system takes images of the real scene from a webcam as input and allows the virtual agent to look at the person it is facing. This paper focuses mainly on animation, through a description of our animation system, which combines two types of models: muscle-based models for facial animation and parametric models for gaze control. The realism of the resulting animation is also discussed.
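To make the webcam-to-gaze pipeline concrete, here is a minimal sketch of the parametric side of such a system: mapping a detected face position in the image to yaw/pitch angles for an avatar's eyes. The function name `gaze_angles`, the field-of-view parameters, and the linear pixel-to-angle mapping are illustrative assumptions, not the paper's actual model.

```python
def gaze_angles(u, v, width, height, fov_h=60.0, fov_v=45.0):
    """Return (yaw, pitch) in degrees for a face centered at pixel (u, v).

    Assumes the camera's optical axis points straight out of the screen
    and that angle grows linearly with pixel offset (a small-angle
    approximation of a pinhole camera).
    """
    # Normalize horizontal offset from the image center into [-1, 1].
    dx = (u - width / 2.0) / (width / 2.0)
    # Normalize vertical offset; positive when the face is above center.
    dy = (height / 2.0 - v) / (height / 2.0)
    # Scale by half the field of view in each direction.
    yaw = dx * fov_h / 2.0
    pitch = dy * fov_v / 2.0
    return yaw, pitch

# A face detected at the center of a 640x480 frame yields a neutral gaze:
print(gaze_angles(320, 240, 640, 480))  # -> (0.0, 0.0)
```

In a full system, (u, v) would come from a face detector running on each webcam frame, and the resulting angles would drive the avatar's eye and head joints.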
Copyright information
© 2003 Springer-Verlag Berlin Heidelberg
Cite this paper
Courty, N., Breton, G., Pelé, D. (2003). Embodied in a Look: Bridging the Gap between Humans and Avatars. In: Rist, T., Aylett, R.S., Ballin, D., Rickel, J. (eds) Intelligent Virtual Agents. IVA 2003. Lecture Notes in Computer Science, vol. 2792. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-39396-2_19
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-20003-1
Online ISBN: 978-3-540-39396-2