Explanatory pragmatism: a context-sensitive framework for explainable medical AI

Published: 01 March 2022

Abstract

Explainable artificial intelligence (XAI) is an emerging, multidisciplinary field of research that seeks to develop methods and tools for making AI systems more explainable or interpretable. XAI researchers increasingly recognise explainability as a context-, audience- and purpose-sensitive phenomenon, rather than a single well-defined property that can be directly measured and optimised. However, the absence of an overarching definition of explainability poses a risk of miscommunication between the many different researchers within this multidisciplinary space. This is the problem we seek to address in this paper. We outline a framework, called Explanatory Pragmatism, which we argue has three attractive features. First, it allows us to conceptualise explainability in explicitly context-, audience- and purpose-relative terms, while retaining a unified underlying definition of explainability. Second, it makes visible any normative disagreements about the purposes for which explanations are sought that may underpin conflicting claims about explainability. Third, it allows us to distinguish several dimensions of AI explainability. We illustrate this framework by applying it to a case study involving a machine learning model for predicting whether patients suffering from disorders of consciousness were likely to recover consciousness.


Cited By

  • (2024) Mapping the landscape of ethical considerations in explainable AI research. Ethics and Information Technology, 26(3). https://doi.org/10.1007/s10676-024-09773-7. Online publication date: 25-Jun-2024.
  • (2023) Explainable AI and Causal Understanding: Counterfactual Approaches Considered. Minds and Machines, 33(2), 347-377. https://doi.org/10.1007/s11023-023-09637-x. Online publication date: 1-Jun-2023.
  • (2022) Putting explainable AI in context: institutional explanations for medical AI. Ethics and Information Technology, 24(2). https://doi.org/10.1007/s10676-022-09649-8. Online publication date: 6-May-2022.


Published In

Ethics and Information Technology, Volume 24, Issue 1
Mar 2022
215 pages

Publisher

Kluwer Academic Publishers

United States

Publication History

Published: 01 March 2022
Accepted: 05 January 2022

Author Tags

  1. Explainable artificial intelligence
  2. XAI
  3. Medical artificial intelligence
  4. Explanation
  5. Understanding
  6. Ethics of artificial intelligence

Qualifiers

  • Research-article

