
Response Times when Interpreting Artificial Subtle Expressions are Shorter than with Human-like Speech Sounds

Published: 02 May 2017
DOI: 10.1145/3025453.3025649

Abstract

Artificial subtle expressions (ASEs) are machine-like expressions used to convey a system's confidence level to users intuitively. In this paper, we focus on the cognitive load users incur when interpreting ASEs. Specifically, we assume that a shorter response time indicates a lower cognitive load, and we hypothesize that users will show shorter response times when interpreting ASEs than when interpreting human-like speech sounds. We verified this hypothesis in a web-based investigation that assessed participants' cognitive loads by measuring their response times when interpreting ASEs and speech sounds.
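The abstract's core measurement is a between-condition comparison of response times (ASE vs. human-like speech sounds). As a rough, hypothetical sketch of that kind of comparison, and not the authors' actual analysis, the Python snippet below runs Welch's t-test on made-up response-time samples; every data value and variable name here is an assumption introduced purely for illustration.

```python
# Hypothetical sketch: compare response times (ms) between two conditions.
# All values below are invented for illustration; they are NOT the paper's data.
import numpy as np
from scipy import stats

rt_ase = np.array([820, 790, 905, 760, 880, 840, 795, 870])          # ASE condition (hypothetical)
rt_speech = np.array([1010, 960, 1100, 980, 1045, 990, 1070, 1020])  # speech condition (hypothetical)

# Welch's t-test: does not assume equal variances across the two conditions.
t_stat, p_value = stats.ttest_ind(rt_ase, rt_speech, equal_var=False)

print(f"mean RT (ASE):    {rt_ase.mean():.1f} ms")
print(f"mean RT (speech): {rt_speech.mean():.1f} ms")
print(f"Welch's t = {t_stat:.2f}, p = {p_value:.4f}")
```

Under the paper's assumption, a shorter mean response time in the ASE condition would be read as a lower cognitive load.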


      Published In

      CHI '17: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems
      May 2017, 7138 pages
      ISBN: 9781450346559
      DOI: 10.1145/3025453

      Publisher

      Association for Computing Machinery, New York, NY, United States


      Author Tags

      1. artificial subtle expressions (ASEs)
      2. cognitive loads
      3. response time
      4. speech

      Qualifiers

      • Research-article

      Funding Sources

      • Swedish Governmental Agency for Innovation Systems (VINNOVA)

      Conference

      CHI '17

      Acceptance Rates

      CHI '17 Paper Acceptance Rate: 600 of 2,400 submissions, 25%
      Overall Acceptance Rate: 6,199 of 26,314 submissions, 24%

      Upcoming Conference

      CHI 2025
      ACM CHI Conference on Human Factors in Computing Systems
      April 26 - May 1, 2025
      Yokohama , Japan

      Contributors

      Other Metrics

      Bibliometrics & Citations

      Bibliometrics

      Article Metrics

      • 0
        Total Citations
      • 436
        Total Downloads
      • Downloads (Last 12 months)28
      • Downloads (Last 6 weeks)0
      Reflects downloads up to 02 Feb 2025

      Other Metrics

      Citations

      Cited By

      View all

      View Options

      Login options

      View options

      PDF

      View or Download as a PDF file.

      PDF

      eReader

      View online with eReader.

      eReader

      Figures

      Tables

      Media

      Share

      Share

      Share this Publication link

      Share on social media