
Evaluating the Impact of Human Explanation Strategies on Human-AI Visual Decision-Making

Published: 16 April 2023

Abstract

Artificial intelligence (AI) is increasingly being deployed in high-stakes domains, such as disaster relief and radiology, to aid practitioners during the decision-making process. Explainable AI techniques have been developed and deployed to provide users with insights into why the AI made certain predictions. However, recent research suggests that these techniques may confuse or mislead users. We conducted two studies to uncover the strategies humans use to explain decisions and to understand how those explanation strategies affect visual decision-making. In the first study, we elicited explanations from humans assessing and localizing damaged buildings in post-disaster satellite imagery and identified four core explanation strategies they employed. We then studied the impact of these strategies by framing the explanations from Study 1 as if they were generated by AI and showing them to a different set of decision-makers performing the same task. We provide initial insights into how causal explanation strategies improve humans' accuracy and calibrate humans' reliance on AI when the AI is incorrect. However, we also find that causal explanation strategies may lead to incorrect rationalizations when the AI presents a correct assessment with incorrect localization. We explore the implications of our findings for the design of human-centered explainable AI and discuss directions for future work.




        Published In

        cover image Proceedings of the ACM on Human-Computer Interaction
        Proceedings of the ACM on Human-Computer Interaction  Volume 7, Issue CSCW1
        CSCW
        April 2023
        3836 pages
        EISSN:2573-0142
        DOI:10.1145/3593053
        Issue’s Table of Contents
        This work is licensed under a Creative Commons Attribution-NoDerivatives International 4.0 License.

        Publisher

        Association for Computing Machinery, New York, NY, United States

        Publication History

        Published: 16 April 2023
        Published in PACMHCI Volume 7, Issue CSCW1


        Author Tags

        1. explanation generation
        2. human-AI collaboration
        3. human-centered explainable AI

        Qualifiers

        • Research-article

        Funding Sources

        • Center for Machine Learning and Health - Internal Funding


        Article Metrics

        • Downloads (last 12 months): 789
        • Downloads (last 6 weeks): 80

        Reflects downloads up to 26 Jul 2024.

        Cited By

        • (2024) The Impact of Imperfect XAI on Human-AI Decision-Making. Proceedings of the ACM on Human-Computer Interaction 8, CSCW1, 1-39. https://doi.org/10.1145/3641022. Online publication date: 26-Apr-2024.
        • (2024) To Search or To Gen? Exploring the Synergy between Generative AI and Web Search in Programming. Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems, 1-8. https://doi.org/10.1145/3613905.3650867. Online publication date: 11-May-2024.
        • (2024) Advancing Patient-Centered Shared Decision-Making with AI Systems for Older Adult Cancer Patients. Proceedings of the CHI Conference on Human Factors in Computing Systems, 1-20. https://doi.org/10.1145/3613904.3642353. Online publication date: 11-May-2024.
        • (2024) Mapswipe for SDGs 3 & 13: take urgent cartographic action to combat heat vulnerability of manufactured and mobile home communities. International Journal of Cartography, 1-23. https://doi.org/10.1080/23729333.2024.2359074. Online publication date: 2-Jul-2024.
        • (2024) Majority voting of doctors improves appropriateness of AI reliance in pathology. International Journal of Human-Computer Studies 190, 103315. https://doi.org/10.1016/j.ijhcs.2024.103315. Online publication date: Oct-2024.
        • Reassuring, Misleading, Debunking: Comparing Effects of XAI Methods on Human Decisions. ACM Transactions on Interactive Intelligent Systems. https://doi.org/10.1145/3665647.
