DOI: 10.1145/3485447.3512240
Research article (open access)

Will You Accept the AI Recommendation? Predicting Human Behavior in AI-Assisted Decision Making

Published: 25 April 2022

Abstract

Internet users make numerous decisions online every day. With recent rapid advances in AI, AI-assisted decision making, in which an AI model provides a decision recommendation and its confidence while the human makes the final decision, has emerged as a new paradigm of human-AI collaboration. In this paper, we aim to obtain a quantitative understanding of whether and when human decision makers adopt the AI model's recommendations. We define a space of human behavior models by decomposing the human decision maker's cognitive process in each decision-making task into two components: a utility component (evaluating the utility of different actions) and a selection component (selecting an action to take). We then perform a systematic search of this model space to identify the model that best fits real-world human behavior data. Our results highlight that in AI-assisted decision making, human decision makers' utility evaluation and action selection are influenced by their own judgment of, and confidence in, the decision-making task. Further, human decision makers tend to distort the decision confidence when evaluating utility. Finally, we analyze how humans' adoption of AI recommendations differs as the stakes of the decisions vary.
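The two-component decomposition the abstract describes can be illustrated with a minimal sketch. This is not the paper's fitted model: the Prelec-style weighting function, the `gamma` and `temperature` parameters, and the binary accept/reject framing are all illustrative assumptions.

```python
import math


def weighted_confidence(c, gamma=0.65):
    """Prelec-style probability weighting: over-weights small
    probabilities and under-weights large ones. gamma is a
    hypothetical distortion parameter, not a fitted value."""
    return math.exp(-((-math.log(c)) ** gamma))


def utility(accept_ai, ai_conf, own_conf, gamma=0.65):
    """Utility component: the expected chance of being correct,
    computed from the distorted (weighted) confidences."""
    if accept_ai:
        return weighted_confidence(ai_conf, gamma)
    return weighted_confidence(own_conf, gamma)


def p_accept_ai(ai_conf, own_conf, temperature=1.0, gamma=0.65):
    """Selection component: softmax choice between accepting the
    AI recommendation and keeping one's own judgment."""
    u_accept = utility(True, ai_conf, own_conf, gamma)
    u_reject = utility(False, ai_conf, own_conf, gamma)
    z = math.exp(u_accept / temperature) + math.exp(u_reject / temperature)
    return math.exp(u_accept / temperature) / z
```

Under this sketch, a confident AI paired with an unsure human (e.g. `p_accept_ai(0.9, 0.5)`) yields an acceptance probability above 0.5, and the confidence distortion makes that probability less extreme than raw expected-utility maximization would predict.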





          Published In

          WWW '22: Proceedings of the ACM Web Conference 2022
          April 2022
          3764 pages
          ISBN:9781450390965
          DOI:10.1145/3485447
          This work is licensed under a Creative Commons Attribution International 4.0 License.


          Publisher

          Association for Computing Machinery

          New York, NY, United States


          Author Tags

          1. AI-Assisted Human Decision Making
          2. Behavior Model
          3. Human-Subject Experiments

          Qualifiers

          • Research-article
          • Research
          • Refereed limited


          Conference

          WWW '22
          Sponsor:
          WWW '22: The ACM Web Conference 2022
          April 25 - 29, 2022
          Virtual Event, Lyon, France

          Acceptance Rates

          Overall Acceptance Rate 1,899 of 8,196 submissions, 23%

          Article Metrics

          • Downloads (Last 12 months)1,370
          • Downloads (Last 6 weeks)104
          Reflects downloads up to 30 Aug 2024

Cited By

• (2024) "AI on Loss in Decision-Making and Its Associations With Digital Disorder, Socio-Demographics, and Physical Health Outcomes in Iran." In Exploring Youth Studies in the Age of AI, 254–265. DOI: 10.4018/979-8-3693-3350-1.ch014. Published 14 Jun 2024.
• (2024) "How Do Humans Learn About the Reliability of Automation?" Cognitive Research: Principles and Implications 9(1). DOI: 10.1186/s41235-024-00533-1. Published 16 Feb 2024.
• (2024) "Enhancing AI-Assisted Group Decision Making through LLM-Powered Devil's Advocate." In Proceedings of the 29th International Conference on Intelligent User Interfaces, 103–119. DOI: 10.1145/3640543.3645199. Published 18 Mar 2024.
• (2023) "Strategic Adversarial Attacks in AI-Assisted Decision Making to Reduce Human Trust and Reliance." In Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, 3020–3028. DOI: 10.24963/ijcai.2023/337. Published 19 Aug 2023.
• (2023) "Three Challenges for AI-Assisted Decision-Making." Perspectives on Psychological Science. DOI: 10.1177/17456916231181102. Published 13 Jul 2023.
• (2023) "AI Trust: Can Explainable AI Enhance Warranted Trust?" Human Behavior and Emerging Technologies 2023, 1–12. DOI: 10.1155/2023/4637678. Published 31 Oct 2023.
• (2023) "Know What Not To Know: Users' Perception of Abstaining Classifiers." In Companion Publication of the 2023 ACM Designing Interactive Systems Conference, 169–172. DOI: 10.1145/3563703.3596622. Published 10 Jul 2023.
• (2023) "Who Should I Trust: AI or Myself? Leveraging Human and AI Correctness Likelihood to Promote Appropriate Trust in AI-Assisted Decision-Making." In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1–19. DOI: 10.1145/3544548.3581058. Published 19 Apr 2023.
• (2023) "Are Two Heads Better Than One in AI-Assisted Decision Making? Comparing the Behavior and Performance of Groups and Individuals in Human-AI Collaborative Recidivism Risk Assessment." In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1–18. DOI: 10.1145/3544548.3581015. Published 19 Apr 2023.
• (2023) "Finding Its Voice: The Influence of Robot Voice on Fit, Social Attributes, and Willingness to Use Among Older Adults in the U.S. and Japan." In 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), 2072–2079. DOI: 10.1109/RO-MAN57019.2023.10309390. Published 28 Aug 2023.
