DOI: 10.1145/3301275.3302277
Research article · IUI Conference Proceedings

Do I trust my machine teammate? An investigation from perception to decision

Published: 17 March 2019

Abstract

In human-machine collaboration, understanding the reasons behind each human decision is critical for interpreting the performance of the human-machine team. Through an experimental study of a system with varied levels of accuracy, we describe how human trust interacts with system performance, human perception, and decisions. We find that humans are able to perceive the performance of automated systems as well as their own, and that they adjust their trust levels according to system accuracy. A system accuracy of 70% appears to mark the threshold between increasing and decreasing human trust and system usage. We also show that trust is derived from a series of user decisions rather than from a single one, and that it relates to users' perceptions. We propose a general framework depicting how trust and perception affect human decision making, which can serve as a guideline for future human-machine collaboration design.




Published In

IUI '19: Proceedings of the 24th International Conference on Intelligent User Interfaces
March 2019
713 pages
ISBN:9781450362726
DOI:10.1145/3301275

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. decision making
  2. dynamic process
  3. machine performance
  4. perception
  5. trust

Qualifiers

  • Research-article

Conference

IUI '19

Acceptance Rates

IUI '19 Paper Acceptance Rate: 71 of 282 submissions, 25%
Overall Acceptance Rate: 746 of 2,811 submissions, 27%


Article Metrics

  • Downloads (last 12 months): 179
  • Downloads (last 6 weeks): 35
Reflects downloads up to 26 Sep 2024.

Cited By
  • (2024) Trust in Human-Agent Teams: A Multilevel Perspective and Future Research Agenda. Organizational Psychology Review. DOI: 10.1177/20413866241253278. Online publication date: 21-May-2024.
  • (2024) Behave Yourself! Behavioral Indicators of Trust in Human-Agent Teams. Proceedings of the Human Factors and Ergonomics Society Annual Meeting. DOI: 10.1177/10711813241276485. Online publication date: 2-Sep-2024.
  • (2024) Does More Advice Help? The Effects of Second Opinions in AI-Assisted Decision Making. Proceedings of the ACM on Human-Computer Interaction, 8(CSCW1), 1–31. DOI: 10.1145/3653708. Online publication date: 26-Apr-2024.
  • (2024) "It would work for me too": How Online Communities Shape Software Developers' Trust in AI-Powered Code Generation Tools. ACM Transactions on Interactive Intelligent Systems, 14(2), 1–39. DOI: 10.1145/3651990. Online publication date: 9-Mar-2024.
  • (2024) Enhancing AI-Assisted Group Decision Making through LLM-Powered Devil's Advocate. Proceedings of the 29th International Conference on Intelligent User Interfaces, 103–119. DOI: 10.1145/3640543.3645199. Online publication date: 18-Mar-2024.
  • (2024) Establishing Appropriate Trust in AI through Transparency and Explainability. Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems, 1–6. DOI: 10.1145/3613905.3638184. Online publication date: 11-May-2024.
  • (2024) Dealing with Uncertainty: Understanding the Impact of Prognostic Versus Diagnostic Tasks on Trust and Reliance in Human-AI Decision Making. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1–17. DOI: 10.1145/3613904.3641905. Online publication date: 11-May-2024.
  • (2024) May I Ask a Follow-up Question? Understanding the Benefits of Conversations in Neural Network Explainability. International Journal of Human–Computer Interaction, 1–25. DOI: 10.1080/10447318.2024.2364986. Online publication date: 8-Aug-2024.
  • (2024) In search of verifiability: Explanations rarely enable complementary performance in AI-advised decision making. AI Magazine. DOI: 10.1002/aaai.12182. Online publication date: Jul-2024.
  • (2023) A Literature Review of Human–AI Synergy in Decision Making: From the Perspective of Affordance Actualization Theory. Systems, 11(9), 442. DOI: 10.3390/systems11090442. Online publication date: 25-Aug-2023.
