DOI: 10.1145/2663204.2666283
Extended abstract

Realizing Robust Human-Robot Interaction under Real Environments with Noises

Published: 12 November 2014

Abstract

In human-human interaction, a speaker considers the interlocutor's situation when deciding when to begin speaking. We assume this tendency also applies to human-robot interaction when a human treats a humanoid robot as a social being and behaves as a cooperative user. As part of this social norm, we have built a model that predicts when a user is likely to begin speaking to a humanoid robot. The model can prevent a robot from generating erroneous reactions by ignoring input noise. In my Ph.D. thesis, I aim to realize robust human-robot interaction in real, noisy environments. To this end, we have begun constructing a robot dialogue system that uses multiple modalities, such as audio and vision, together with the robot's posture information. We plan to: 1) construct a robot dialogue system; 2) develop components based on social norms, such as an input-sound classifier, control of users' untimely utterances, and estimation of a user's degree of urgency; and 3) extend the system from one-to-one dialogue to multi-party dialogue.
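The input-sound classifier mentioned above could, for instance, combine audio energy with visual and posture cues to decide whether a detected sound is directed user speech or noise to be ignored. A minimal rule-based sketch follows; the feature names, thresholds, and decision rules are illustrative assumptions, not the thesis's actual model:

```python
# Hypothetical sketch of a multimodal input-sound classifier.
# All features and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Observation:
    audio_energy: float      # normalized frame energy, 0..1
    user_facing_robot: bool  # from face/gaze detection (visual modality)
    robot_is_moving: bool    # from the robot's posture/actuator state

def is_user_speech(obs: Observation, energy_threshold: float = 0.4) -> bool:
    """Return True if the sound is likely speech directed at the robot."""
    if obs.audio_energy < energy_threshold:
        return False             # too quiet: treat as background noise
    if obs.robot_is_moving:
        return False             # likely ego-noise from the robot's own motors
    return obs.user_facing_robot # cooperative users face the robot to speak

# Loud sound, user facing a stationary robot -> classified as speech
print(is_user_speech(Observation(0.8, True, False)))  # True
```

A learned classifier (e.g., a GMM-based rejector as in noise-robust dialogue systems) would replace these hand-set rules, but the fusion of audio, visual, and posture evidence would take the same shape.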


Cited By

  • (2021) Dependability and Safety: Two Clouds in the Blue Sky of Multimodal Interaction. In Proceedings of the 2021 International Conference on Multimodal Interaction, pages 781-787. DOI: 10.1145/3462244.3479881. Online publication date: 18 Oct 2021.

    Published In

    ICMI '14: Proceedings of the 16th International Conference on Multimodal Interaction
    November 2014
    558 pages
    ISBN:9781450328852
    DOI:10.1145/2663204

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Author Tags

    1. human-robot interaction
    2. spoken dialogue systems

    Qualifiers

    • Extended-abstract

    Conference

    ICMI '14
    Acceptance Rates

    ICMI '14 paper acceptance rate: 51 of 127 submissions (40%)
    Overall acceptance rate: 453 of 1,080 submissions (42%)
