DOI: 10.1145/2582051.2582091

Augmenting expressivity of artificial subtle expressions (ASEs): preliminary design guideline for ASEs

Published: 07 March 2014

Abstract

There is little hope that information-providing systems will ever be perfectly reliable. Several studies have indicated that imperfect systems can reduce users' cognitive load during interaction by expressing their level of confidence. Artificial subtle expressions (ASEs), machine-like artificial sounds added just after a system's suggestion to convey its confidence level to users, have attracted attention because of their simplicity and efficiency. The purpose of the work reported here was to develop a preliminary design guideline for ASEs and thereby determine how far ASEs can be extended. We believe that augmenting the expressivity of ASEs would reduce the cognitive load users incur in processing the information provided by such systems and would, in turn, augment various cognitive capacities of users. Our experimental results showed that ASEs with a decreasing pitch conveyed a low confidence level to users. This result was used to formulate a concrete design guideline for ASEs.
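To make the finding concrete, below is a minimal sketch of how an ASE-like beep with a flat versus a decreasing pitch contour could be synthesized. The waveform, frequencies (a flat 440 Hz tone versus a 440 Hz to 330 Hz fall), duration, and file names are illustrative assumptions, not the stimuli used in the study.

```python
# Sketch: synthesize two short beeps in the spirit of ASEs. A flat
# contour holds pitch constant; a decreasing contour lets pitch fall,
# which listeners in the study read as low confidence. All specific
# parameter values here are assumptions for illustration.
import wave
import numpy as np

SAMPLE_RATE = 44100  # samples per second


def ase_beep(f_start: float, f_end: float, duration: float = 0.2) -> np.ndarray:
    """Synthesize a sine beep whose pitch glides from f_start to f_end (Hz)."""
    t = np.linspace(0.0, duration, int(SAMPLE_RATE * duration), endpoint=False)
    # Linear frequency sweep; accumulate frequency to obtain the phase.
    freq = np.linspace(f_start, f_end, t.size)
    phase = 2.0 * np.pi * np.cumsum(freq) / SAMPLE_RATE
    return 0.5 * np.sin(phase)


def write_wav(path: str, samples: np.ndarray) -> None:
    """Write mono 16-bit PCM audio to a WAV file."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)  # 16-bit samples
        w.setframerate(SAMPLE_RATE)
        w.writeframes((samples * 32767).astype(np.int16).tobytes())


# Flat contour: constant pitch (no falling cue).
write_wav("ase_flat.wav", ase_beep(440.0, 440.0))
# Decreasing contour: falling pitch, the cue associated with low confidence.
write_wav("ase_decreasing.wav", ase_beep(440.0, 330.0))
```

In use, such a sound would be appended immediately after the system's spoken or displayed suggestion, so the pitch contour alone carries the confidence cue.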


Cited By

  • (2017) Trust Lengthens Decision Time on Unexpected Recommendations in Human-agent Interaction. In Proceedings of the 5th International Conference on Human Agent Interaction, 245–252. DOI: 10.1145/3125739.3125751. Online publication date: 17-Oct-2017.
  • (2016) Sound emblems for affective multimodal output of a robotic tutor: a perception study. In Proceedings of the 18th ACM International Conference on Multimodal Interaction, 256–260. DOI: 10.1145/2993148.2993169. Online publication date: 31-Oct-2016.


      Published In

      AH '14: Proceedings of the 5th Augmented Human International Conference
      March 2014
      249 pages
      ISBN:9781450327619
      DOI:10.1145/2582051

      Sponsors

• MEET IN KOBE 21st Century

      Publisher

      Association for Computing Machinery

      New York, NY, United States


      Author Tags

      1. artificial subtle expressions (ASEs)
      2. confidence
      3. design guideline
      4. inflection pattern

      Qualifiers

      • Research-article

      Conference

      AH '14
      Sponsor:
      • MEET IN KOBE 21st Century

      Acceptance Rates

      Overall Acceptance Rate 121 of 306 submissions, 40%

Article Metrics

• Downloads (last 12 months): 1
• Downloads (last 6 weeks): 0

Reflects downloads up to 10 Feb 2025.

