Explanations Can Reduce Overreliance on AI Systems During Decision-Making

Published: 16 April 2023

Abstract

Prior work has identified a resilient phenomenon that threatens the performance of human-AI decision-making teams: overreliance, in which people agree with an AI even when it is incorrect. Surprisingly, overreliance is not reduced when the AI produces explanations for its predictions, compared to providing predictions alone. Some have argued that overreliance results from cognitive biases or uncalibrated trust, attributing it to an inevitability of human cognition. By contrast, our paper argues that people strategically choose whether or not to engage with an AI explanation, and demonstrates empirically that there are scenarios where AI explanations reduce overreliance. To achieve this, we formalize this strategic choice in a cost-benefit framework, in which the costs and benefits of engaging with the task are weighed against the costs and benefits of relying on the AI. We manipulate these costs and benefits in a maze task, where participants collaborate with a simulated AI to find the exit of a maze. Through five studies (N = 731), we find that costs such as task difficulty (Study 1) and explanation difficulty (Studies 2 and 3), and benefits such as monetary compensation (Study 4), affect overreliance. Finally, Study 5 adapts the Cognitive Effort Discounting paradigm to quantify the utility of different explanations, providing further support for our framework. Our results suggest that some of the null effects found in the literature could be due in part to the explanation not sufficiently reducing the costs of verifying the AI's prediction.
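The cost-benefit framing in the abstract can be sketched as a simple decision rule (a hypothetical illustration under assumed utility values, not the authors' implementation or model):

```python
from dataclasses import dataclass

@dataclass
class Option:
    """Hypothetical net-utility terms for one strategy (engage vs. rely)."""
    benefit: float  # e.g. expected reward for answering correctly
    cost: float     # e.g. cognitive effort or time spent

def choose_strategy(engage: Option, rely: Option) -> str:
    """Pick whichever strategy has the higher net utility.

    Mirrors the paper's framing: a person engages with the task (and the
    AI's explanation) only when its net benefit exceeds that of simply
    relying on the AI's prediction.
    """
    if engage.benefit - engage.cost > rely.benefit - rely.cost:
        return "engage"
    return "rely"

# A hard-to-verify explanation keeps the cost of engaging high, so the
# rational choice is to rely on the AI (overreliance persists):
hard = choose_strategy(Option(benefit=1.0, cost=0.9), Option(benefit=0.8, cost=0.1))
# An easy-to-verify explanation lowers the cost of engaging, tipping the
# choice away from overreliance:
easy = choose_strategy(Option(benefit=1.0, cost=0.2), Option(benefit=0.8, cost=0.1))
```

Here `hard` resolves to `"rely"` and `easy` to `"engage"`, illustrating how lowering verification costs can shift the strategic choice; the numeric utilities are invented for the example.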




Published In

Proceedings of the ACM on Human-Computer Interaction, Volume 7, Issue CSCW1 (CSCW), April 2023, 3836 pages
EISSN: 2573-0142
DOI: 10.1145/3593053
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. cost-benefit analysis
  2. decision-making
  3. explainable AI
  4. human-AI collaboration

Qualifiers

  • Research-article

Funding Sources

  • Snap Inc.


Cited By

  • (2024) Balancing Act: Exploring the Interplay Between Human Judgment and Artificial Intelligence in Problem-solving, Creativity, and Decision-making. IgMin Research 2(3), 145-158. DOI: 10.61927/igmin158. Online publication date: 25-Mar-2024.
  • (2024) Advancements and Applications of Artificial Intelligence in Pharmaceutical Sciences: A Comprehensive Review. Iranian Journal of Pharmaceutical Research 23(1). DOI: 10.5812/ijpr-150510. Online publication date: 15-Oct-2024.
  • (2024) Advanced AI Applications for Drug Discovery. In Advances in Computational Intelligence for the Healthcare Industry 4.0, 42-86. DOI: 10.4018/979-8-3693-2333-5.ch003. Online publication date: 7-Jun-2024.
  • (2024) Beyond Recommendations: From Backward to Forward AI Support of Pilots' Decision-Making Process. Proceedings of the ACM on Human-Computer Interaction 8(CSCW2), 1-32. DOI: 10.1145/3687024. Online publication date: 8-Nov-2024.
  • (2024) The Algorithm and the Org Chart: How Algorithms Can Conflict with Organizational Structures. Proceedings of the ACM on Human-Computer Interaction 8(CSCW2), 1-31. DOI: 10.1145/3686903. Online publication date: 8-Nov-2024.
  • (2024) Towards Understanding Human-AI Reliance Patterns Through Explanation Styles. In Companion of the 2024 ACM International Joint Conference on Pervasive and Ubiquitous Computing, 861-865. DOI: 10.1145/3675094.3678996. Online publication date: 5-Oct-2024.
  • (2024) Understanding the User Perception and Experience of Interactive Algorithmic Recourse Customization. ACM Transactions on Computer-Human Interaction. DOI: 10.1145/3674503. Online publication date: 28-Jun-2024.
  • (2024) You Can Only Verify When You Know the Answer: Feature-Based Explanations Reduce Overreliance on AI for Easy Decisions, but Not for Hard Ones. In Proceedings of Mensch und Computer 2024, 156-170. DOI: 10.1145/3670653.3670660. Online publication date: 1-Sep-2024.
  • (2024) Who Validates the Validators? Aligning LLM-Assisted Evaluation of LLM Outputs with Human Preferences. In Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology, 1-14. DOI: 10.1145/3654777.3676450. Online publication date: 13-Oct-2024.
  • (2024) VIME: Visual Interactive Model Explorer for Identifying Capabilities and Limitations of Machine Learning Models for Sequential Decision-Making. In Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology, 1-21. DOI: 10.1145/3654777.3676323. Online publication date: 13-Oct-2024.
