
Furhat: a back-projected human-like robot head for multiparty human-machine interaction

Published: 21 February 2011

Abstract

In this chapter, we first summarize findings from two previous studies on the limitations of flat displays for embodied conversational agents (ECAs) in face-to-face human-agent interaction. We then motivate the need for a three-dimensional display of faces to guarantee accurate delivery of gaze and directional movements, and present Furhat: a novel, simple, highly effective, and human-like back-projected robot head that uses computer animation to deliver facial movements and is equipped with a pan-tilt neck. After a detailed account of why and how Furhat was built, we discuss the advantages of optically projected animated agents for interaction, in terms of situatedness, environment and context awareness, and social, human-like face-to-face interaction with robots in which subtle nonverbal and social facial signals can be communicated. At the end of the chapter, we present a recent application of Furhat as a multimodal multiparty interaction system, exhibited at the London Science Museum as part of a robot festival. We conclude by discussing future developments, applications, and opportunities for this technology.
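To give a concrete sense of the geometry involved when a pan-tilt neck is used to deliver directional movements such as gaze shifts toward an interlocutor, the sketch below computes the pan (yaw) and tilt (pitch) angles that orient a head toward a 3D target. This is a minimal illustration only; the function name, the coordinate convention, and the example positions are assumptions of this sketch and not part of the Furhat implementation described in the chapter.

```python
import math

def pan_tilt_to_target(head_pos, target_pos):
    """Compute pan (yaw) and tilt (pitch) angles, in degrees, that would
    orient a pan-tilt neck mounted at head_pos toward target_pos.

    Coordinate convention (an assumption of this sketch): x points right,
    y points up, z points forward from the head's neutral pose.
    """
    dx = target_pos[0] - head_pos[0]
    dy = target_pos[1] - head_pos[1]
    dz = target_pos[2] - head_pos[2]
    pan = math.degrees(math.atan2(dx, dz))                   # rotate left/right
    tilt = math.degrees(math.atan2(dy, math.hypot(dx, dz)))  # nod up/down
    return pan, tilt

# Example: an interlocutor at head height, 1 m to the right and 2 m ahead.
print(pan_tilt_to_target((0.0, 1.2, 0.0), (1.0, 1.2, 2.0)))
# -> roughly (26.6, 0.0): pan about 27 degrees to the right, no tilt
```

In a multiparty setting like the museum exhibit described above, such a computation would be applied per interlocutor position so the head can physically turn toward the addressed party, while the projected animated face handles finer gaze and expression cues.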



Published In

COST'11: Proceedings of the 2011 International Conference on Cognitive Behavioural Systems
February 2011, 448 pages
ISBN: 9783642345838
Editors: Anna Esposito, Antonietta M. Esposito, Alessandro Vinciarelli, Rüdiger Hoffmann, Vincent C. Müller

Sponsors

  • SSPNET: Social Signal Processing Network
  • International Institute for Advanced Scientific Studies "E.R. Caianiello"
  • European COST Action 2102
  • Second University of Naples
  • SERN: Società Italiana Reti Neuroniche

Publisher

Springer-Verlag, Berlin, Heidelberg


Author Tags

  1. Furhat
  2. Gaze
  3. Gaze perception
  4. Mona Lisa effect
  5. Robot heads
  6. avatar
  7. back projection
  8. dialogue system
  9. facial animation
  10. multimodal interaction
  11. multiparty interaction
  12. situated interaction
  13. talking heads



Cited By

  • (2024) A Framework to Design Engaging Interactions in Socially Assistive Robots to Mitigate Dementia-Related Symptoms. ACM Transactions on Human-Robot Interaction 14(1), 1-25. DOI: 10.1145/3700889. Online: 18 Oct 2024
  • (2024) Balancing Human Likeness in Social Robots: Impact on Children's Lexical Alignment and Self-disclosure for Trust Assessment. ACM Transactions on Human-Robot Interaction 13(4), 1-27. DOI: 10.1145/3659062. Online: 23 Oct 2024
  • (2024) Conformity and Trust in Multi-party vs. Individual Human-Robot Interaction. Proceedings of the 24th ACM International Conference on Intelligent Virtual Agents, 1-9. DOI: 10.1145/3652988.3673954. Online: 16 Sep 2024
  • (2024) Design of a Multimodal Robot-Based Conversational Interface: A Case Study with FURHAT. HCI International 2024 – Late Breaking Papers, 299-311. DOI: 10.1007/978-3-031-76803-3_17. Online: 29 Jun 2024
  • (2024) A Conversational Robot for Children's Access to a Cultural Heritage Multimedia Archive. Advances in Information Retrieval, 144-151. DOI: 10.1007/978-3-031-56069-9_11. Online: 24 Mar 2024
  • (2023) That's not a Good Idea: A Robot Changes Your Behavior Against Social Engineering. Proceedings of the 11th International Conference on Human-Agent Interaction, 63-71. DOI: 10.1145/3623809.3623879. Online: 4 Dec 2023
  • (2023) Co-Design of a Robotic Mental Well-Being Coach to Help University Students Manage Public Speaking Anxiety. Proceedings of the 11th International Conference on Human-Agent Interaction, 200-208. DOI: 10.1145/3623809.3623872. Online: 4 Dec 2023
  • (2023) Generative Facial Expressions and Eye Gaze Behavior from Prompts for Multi-Human-Robot Interaction. Adjunct Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology, 1-3. DOI: 10.1145/3586182.3616623. Online: 29 Oct 2023
  • (2023) I Learn Better Alone! Collaborative and Individual Word Learning With a Child and Adult Robot. Proceedings of the 2023 ACM/IEEE International Conference on Human-Robot Interaction, 368-377. DOI: 10.1145/3568162.3577004. Online: 13 Mar 2023
  • (2023) Children's Trust in Robots and the Information They Provide. Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems, 1-7. DOI: 10.1145/3544549.3585801. Online: 19 Apr 2023
