DOI: 10.1145/3630106.3659011
Research Article | Open Access

MiMICRI: Towards Domain-centered Counterfactual Explanations of Cardiovascular Image Classification Models

Published: 05 June 2024
Abstract

    The recent prevalence of publicly accessible, large medical imaging datasets has led to a proliferation of artificial intelligence (AI) models for cardiovascular image classification and analysis. At the same time, the potentially significant impacts of these models have motivated the development of a range of explainable AI (XAI) methods that aim to explain model predictions given certain image inputs. However, many of these methods are not developed or evaluated with domain experts, and their explanations are not contextualized in terms of medical expertise or domain knowledge. In this paper, we propose a novel framework and Python library, MiMICRI, that provides domain-centered counterfactual explanations of cardiovascular image classification models. MiMICRI helps users interactively select and replace segments of medical images that correspond to morphological structures. From the counterfactuals generated, users can then assess the influence of each segment on model predictions and validate the model against known medical facts. We evaluate this library with two medical experts. Our evaluation demonstrates that a domain-centered XAI approach can enhance the interpretability of model explanations and help experts reason about models in terms of relevant domain knowledge. However, the experts also raised concerns about the clinical plausibility of the generated counterfactuals. We conclude with a discussion of the generalizability and trustworthiness of the MiMICRI framework, as well as the implications of our findings for the development of domain-centered XAI methods for model interpretability in healthcare contexts.
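
    As a rough illustration of the segment-replacement idea described in the abstract, the sketch below swaps one morphological structure between two registered cardiac images and measures the resulting shift in a classifier's prediction. The function names, the predict callable, and the label encoding are hypothetical illustrations of the technique, not MiMICRI's documented API.

        import numpy as np

        def swap_segment(image, donor, seg_mask, label):
            """Return a counterfactual whose `label` region comes from `donor`.

            Assumes `image` and `donor` are spatially registered, so one
            segmentation mask indexes the structure in both; a real pipeline
            would align and blend the replaced structure instead.
            """
            counterfactual = image.copy()
            region = seg_mask == label  # e.g. label 1 = left ventricle (hypothetical)
            counterfactual[region] = donor[region]
            return counterfactual

        def segment_influence(predict, image, counterfactual):
            """Difference in predicted probability attributable to the swap."""
            return float(predict(image) - predict(counterfactual))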


          Published In

          FAccT '24: Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency
          June 2024, 2580 pages
          ISBN: 9798400704505
          DOI: 10.1145/3630106
          This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

          Publisher

          Association for Computing Machinery

          New York, NY, United States


          Author Tags

          1. counterfactual explanation
          2. explainable AI
          3. human-centered AI
          4. interactive visualizations

          Qualifiers

          • Research-article
          • Research
          • Refereed limited

          Conference

          FAccT '24

