DOI: 10.1145/1878083.1878096

research-article
Towards an expressive virtual tutor: an implementation of a virtual tutor based on an empirical study of non-verbal behaviour

Published: 29 October 2010

Abstract

In this paper we investigate the non-verbal behaviour of a tutor and propose a model for ECAs (Embodied Conversational Agents) acting as virtual tutors. We conducted an empirical study focusing on the distribution of gaze, head, and eyebrow behaviour of tutors in a teaching scenario, and on the co-occurrence of these behaviours with particular teaching activities and conversational events. Building on the results of this study, we implemented an ECA with conversational capabilities, episodic memory, emotions, and expressive behaviour.
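The co-occurrence analysis the abstract describes can be sketched as interval-overlap counting over two annotation tiers (behaviours and teaching/conversational events). The sketch below is illustrative only: the labels, timestamps, and interval format are hypothetical assumptions, not data or code from the paper.

```python
from collections import Counter

# Hypothetical annotation intervals: (start_s, end_s, label).
# Real data would come from an annotation tool export; these
# values are made up for illustration.
behaviours = [(0.0, 1.2, "gaze_at_student"),
              (1.0, 2.0, "eyebrow_raise"),
              (2.5, 4.0, "head_nod")]
events = [(0.0, 2.2, "explanation"),
          (2.3, 4.5, "feedback")]

def overlaps(a, b):
    """True if two (start, end, label) intervals overlap in time."""
    return a[0] < b[1] and b[0] < a[1]

# Count how often each behaviour co-occurs with each event type.
cooc = Counter((b[2], e[2])
               for b in behaviours for e in events
               if overlaps(b, e))

print(cooc[("gaze_at_student", "explanation")])  # 1
print(cooc[("head_nod", "feedback")])            # 1
```

Normalising these counts by the frequency of each event type would give the kind of behaviour distribution per teaching activity that the study reports.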




Published In

SMVC '10: Proceedings of the 2010 ACM workshop on Surreal media and virtual cloning
October 2010, 76 pages
ISBN: 9781450301756
DOI: 10.1145/1878083

Publisher

Association for Computing Machinery, New York, NY, United States

      Author Tags

      1. eca
      2. gaze
      3. iapd
      4. non-verbal behaviour
      5. tutor

      Qualifiers

      • Research-article

Conference

MM '10: ACM Multimedia Conference
October 29, 2010
Firenze, Italy


Cited By
• (2023) FaceXHuBERT: Text-less Speech-driven E(X)pressive 3D Facial Animation Synthesis Using Self-Supervised Speech Representation Learning. In Proceedings of the 25th International Conference on Multimodal Interaction, 282-291. DOI: 10.1145/3577190.3614157
• (2022) Investigating how speech and animation realism influence the perceived personality of virtual characters and agents. In 2022 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 11-20. DOI: 10.1109/VR51125.2022.00018
• (2021) A Virtual Clown Behavior Model Based on Emotional Biologically Inspired Cognitive Architecture. In Advances in Neural Computation, Machine Learning, and Cognitive Research V, 99-108. DOI: 10.1007/978-3-030-91581-0_14
• (2018) A Study on the Deployment of a Service Robot in an Elderly Care Center. International Journal of Social Robotics. DOI: 10.1007/s12369-018-0492-5
• (n.d.) Value Co-Creation in Smart Services: A Functional Affordances Perspective on Smart Personal Assistants. SSRN Electronic Journal. DOI: 10.2139/ssrn.3923706
