DOI: 10.1145/2468356.2468707

Development of a taxonomy to improve human-robot-interaction through multimodal robot feedback

Published: 27 April 2013

Abstract

The adequacy of a robot's feedback is crucial for advanced human-robot interaction (HRI). We investigate how multimodality can add value for better cooperation in a future in which robots will co-exist with us. There is a great deal of research on service robots, and many scenarios are being worked on: robots that help us, cooperate with us, or even ones that eventually need our assistance. For smooth interaction it is necessary for humans to understand the robot, which is why I propose a taxonomy of robot feedback. The taxonomy, which is mapped out in an iterative process, is aimed at providing a deeper understanding of feedback in general and at improving the specific area of HRI.
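The abstract does not spell out the taxonomy's dimensions. Purely as an illustration of what a multimodal robot feedback taxonomy could look like in practice, the Python sketch below crosses output modality with communicative function; every class and category name in it is an assumption made for illustration, not the taxonomy proposed in the paper.

    # Hypothetical sketch only: the dimensions and category names below are
    # assumptions for illustration; the paper's actual taxonomy is not given
    # in the abstract.
    from dataclasses import dataclass
    from enum import Enum, auto

    class Modality(Enum):
        # Output channels a robot might use for feedback (assumed set).
        SPEECH = auto()
        GESTURE = auto()
        GAZE = auto()
        DISPLAY = auto()
        HAPTIC = auto()

    class Function(Enum):
        # Communicative purpose of the feedback (assumed set).
        ACKNOWLEDGE = auto()    # "I heard and understood you"
        STATUS = auto()         # "I am busy / finished / stuck"
        CLARIFY = auto()        # "Please repeat or be more specific"
        REQUEST_HELP = auto()   # the robot itself needs human assistance

    @dataclass(frozen=True)
    class FeedbackCategory:
        # One cell of the taxonomy: a communicative function realized
        # through one or more output modalities.
        function: Function
        modalities: frozenset

    # Example entry: acknowledging a command with combined speech and a nod
    # (gesture), i.e. multimodal rather than unimodal feedback.
    ack = FeedbackCategory(Function.ACKNOWLEDGE,
                           frozenset({Modality.SPEECH, Modality.GESTURE}))

Representing each category as an immutable value would make it straightforward to enumerate a full modality-by-function grid during the kind of iterative mapping the abstract describes.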



    Published In

    CHI EA '13: CHI '13 Extended Abstracts on Human Factors in Computing Systems
    April 2013, 3360 pages
    ISBN: 9781450319522
    DOI: 10.1145/2468356


    Publisher

    Association for Computing Machinery, New York, NY, United States


    Author Tags

    1. feedback modalities
    2. human-robot communication
    3. multimodal feedback
    4. robot feedback taxonomy

    Qualifiers

    • Short paper

    Conference

    CHI '13

    Acceptance Rates

    CHI EA '13 paper acceptance rate: 630 of 1,963 submissions (32%)
    Overall acceptance rate: 6,164 of 23,696 submissions (26%)
