Research Article | Open Access

Understanding the Role of Human Intuition on Reliance in Human-AI Decision-Making with Explanations

Published: 04 October 2023

Abstract

AI explanations are often mentioned as a way to improve human-AI decision-making, but empirical studies have not found consistent evidence of explanations' effectiveness and, on the contrary, suggest that they can increase overreliance when the AI system is wrong. While many factors may affect reliance on AI support, one important factor is how decision-makers reconcile their own intuition (beliefs or heuristics, based on prior knowledge, experience, or pattern recognition, used to make judgments) with the information provided by the AI system to determine when to override AI predictions. We conduct a think-aloud, mixed-methods study with two explanation types (feature- and example-based) for two prediction tasks to explore how decision-makers' intuition affects their use of AI predictions and explanations, and ultimately their choice of when to rely on AI. Our results identify three types of intuition involved in reasoning about AI predictions and explanations: intuition about the task outcome, features, and AI limitations. Building on these, we summarize three observed pathways for decision-makers to apply their own intuition and override AI predictions. We use these pathways to explain why (1) the feature-based explanations we used did not improve participants' decision outcomes and increased their overreliance on AI, and (2) the example-based explanations we used improved decision-makers' performance over feature-based explanations and helped achieve complementary human-AI performance. Overall, our work identifies directions for further development of AI decision-support systems and explanation methods that help decision-makers effectively apply their intuition to achieve appropriate reliance on AI.
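
For readers unfamiliar with the two explanation styles compared in the study, the Python sketch below illustrates the general idea only: feature-based explanations attach a contribution score to each input feature of a prediction, while example-based explanations surface similar past cases and their observed outcomes. The data, model, and function names here are hypothetical illustrations and are not the authors' implementation or study materials.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import NearestNeighbors

    # Hypothetical tabular prediction task: 200 past cases, 4 features each.
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(200, 4))
    y_train = (X_train[:, 0] + X_train[:, 2] > 0).astype(int)

    model = LogisticRegression().fit(X_train, y_train)

    def feature_based_explanation(x):
        # Per-feature contribution to the logit of a linear model:
        # coefficient * feature value (one common feature-attribution scheme).
        return model.coef_[0] * x

    def example_based_explanation(x, k=3):
        # The k most similar past cases to x, with their observed outcomes.
        nn = NearestNeighbors(n_neighbors=k).fit(X_train)
        _, idx = nn.kneighbors(x.reshape(1, -1))
        return idx[0], y_train[idx[0]]

    x = X_train[0]
    print("AI prediction:", model.predict(x.reshape(1, -1))[0])
    print("Feature contributions:", feature_based_explanation(x))
    print("Similar past cases (indices, labels):", example_based_explanation(x))

In a decision-support interface, the first output would typically be rendered as a bar chart of feature contributions and the second as a list of comparable past cases, which decision-makers can then weigh against their own intuition.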

Published In

Proceedings of the ACM on Human-Computer Interaction, Volume 7, Issue CSCW2 (CSCW), October 2023, 4055 pages
EISSN: 2573-0142
DOI: 10.1145/3626953
This work is licensed under a Creative Commons Attribution International 4.0 License.

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Published: 04 October 2023
Published in PACMHCI Volume 7, Issue CSCW2

Author Tags

1. decision support
2. explainable AI
3. human-AI interaction
4. interpretability

Qualifiers

• Research-article
