DOI: 10.1145/3491102.3502104
Research article
Open access

Understanding the impact of explanations on advice-taking: a user study for AI-based clinical Decision Support Systems

Published: 29 April 2022

Abstract

The field of eXplainable Artificial Intelligence (XAI) focuses on providing explanations for AI systems’ decisions. Applying XAI to AI-based Clinical Decision Support Systems (DSS) should increase trust in the DSS by allowing clinicians to investigate the reasons behind its suggestions. In this paper, we present the results of a user study on the impact of advice from a clinical DSS on healthcare providers’ judgment in two cases: one in which the clinical DSS explains its suggestion and one in which it does not. We examined the weight of advice, the behavioral intention to use the system, and user perceptions, using quantitative and qualitative measures. Our results indicate that advice has a greater impact on judgment when the DSS provides an explanation for its decision. Additionally, drawing on the responses to the open-ended questions, we offer insights on how to improve the explanations that accompany diagnosis forecasts for healthcare assistants, nurses, and doctors.
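The abstract does not define the weight-of-advice measure it refers to. As a minimal sketch for readers unfamiliar with it, the snippet below illustrates the standard judge-advisor formulation of weight of advice (WoA): the ratio of the judgment shift to the distance between the advice and the initial judgment. The function name, example values, and error handling are illustrative choices of ours, not details taken from the paper.

```python
def weight_of_advice(initial: float, advice: float, final: float) -> float:
    """Standard weight-of-advice measure:
    WoA = (final - initial) / (advice - initial).
    WoA = 0 means the advice was ignored; WoA = 1 means the final
    judgment matches the advice exactly. The measure is undefined
    when the advice coincides with the initial judgment."""
    if advice == initial:
        raise ValueError("WoA is undefined when advice equals the initial judgment")
    return (final - initial) / (advice - initial)

# Hypothetical example: a clinician initially rates a diagnosis at 0.30,
# the DSS advises 0.60, and the clinician revises the rating to 0.45.
print(weight_of_advice(0.30, 0.60, 0.45))  # 0.5 -> advice and own judgment weighted equally
```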

Supplemental Material

MP4 File
Talk Video
Transcript for: Talk Video


Published In

CHI '22: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems
April 2022
10459 pages
ISBN: 9781450391573
DOI: 10.1145/3491102
This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery

New York, NY, United States


Badges

  • Honorable Mention

Author Tags

  1. Advice-taking
  2. Behavioral intention
  3. Clinical Decision Support System
  4. HCI
  5. Trust
  6. User Study
  7. XAI
  8. eXplainable AI

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

CHI '22
CHI '22: CHI Conference on Human Factors in Computing Systems
April 29 - May 5, 2022
New Orleans, LA, USA

Acceptance Rates

Overall Acceptance Rate 6,199 of 26,314 submissions, 24%

