DOI: 10.1145/2909824.3020230
Research Article

Evaluating Effects of User Experience and System Transparency on Trust in Automation

Published: 06 March 2017

Abstract

Existing research assessing human operators' trust in automation and robots has primarily examined trust as a steady-state variable, with little emphasis on the evolution of trust over time. With the goal of addressing this research gap, we present a study exploring the dynamic nature of trust. We defined trust of entirety as a measure that accounts for trust across a human's entire interactive experience with automation, and first identified alternatives to quantify it using real-time measurements of trust. Second, we provided a novel model that attempts to explain how trust of entirety evolves as a user interacts repeatedly with automation. Lastly, we investigated the effects of automation transparency on momentary changes of trust. Our results indicated that trust of entirety is better quantified by the average measure of "area under the trust curve" than the traditional post-experiment trust measure. In addition, we found that trust of entirety evolves and eventually stabilizes as an operator repeatedly interacts with a technology. Finally, we observed that a higher level of automation transparency may mitigate the "cry wolf" effect -- wherein human operators begin to reject an automated system due to repeated false alarms.
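
The "area under the trust curve" measure aggregates real-time trust ratings across the whole interaction rather than sampling trust only once at the end. As a minimal sketch of how such a measure could be computed (assuming trust is rated on a numeric scale at known timestamps; the function name, sampling scheme, and normalization here are illustrative assumptions, not the authors' implementation):

    import numpy as np

    def trust_of_entirety(timestamps, ratings):
        # Area under the trust curve via trapezoidal integration,
        # normalized by interaction duration so the result is an
        # average trust level on the original rating scale.
        t = np.asarray(timestamps, dtype=float)
        r = np.asarray(ratings, dtype=float)
        return np.trapz(r, t) / (t[-1] - t[0])

    # Hypothetical session: trust dips after a false alarm at t = 30 s,
    # then gradually recovers.
    t = [0, 10, 20, 30, 40, 50, 60]
    r = [5.0, 5.5, 6.0, 3.5, 4.0, 4.5, 5.0]
    print(trust_of_entirety(t, r))  # ~4.75, vs. 5.0 if only the final rating were used

Under such a measure, a transient "cry wolf" dip after false alarms lowers trust of entirety even if the post-experiment rating has fully recovered.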



      Published In

      HRI '17: Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction
      March 2017, 510 pages
      ISBN: 9781450343367
      DOI: 10.1145/2909824

      Publisher

      Association for Computing Machinery, New York, NY, United States


      Author Tags

      1. automation transparency
      2. long-term interactions
      3. supervisory control
      4. trust in automation


      Conference

      HRI '17

      Acceptance Rates

      HRI '17 Paper Acceptance Rate: 51 of 211 submissions, 24%
      Overall Acceptance Rate: 268 of 1,124 submissions, 24%
