DOI: 10.1007/978-3-030-58342-2_11
Good Counterfactuals and Where to Find Them: A Case-Based Technique for Generating Counterfactuals for Explainable AI (XAI)

Published: 08 June 2020

Abstract

Recently, a groundswell of research has identified the use of counterfactual explanations as a potentially significant solution to the Explainable AI (XAI) problem. It is argued that (i) technically, these counterfactual cases can be generated by permuting problem-features until a class-change is found, (ii) psychologically, they are much more causally informative than factual explanations, (iii) legally, they are GDPR-compliant. However, there are issues around the finding of “good” counterfactuals using current techniques (e.g. sparsity and plausibility). We show that many commonly-used datasets appear to have few “good” counterfactuals for explanation purposes. So, we propose a new case-based approach for generating counterfactuals, using novel ideas about the counterfactual potential and explanatory coverage of a case-base. The new technique reuses patterns of good counterfactuals, present in a case-base, to generate analogous counterfactuals that can explain new problems and their solutions. Several experiments show how this technique can improve the counterfactual potential and explanatory coverage of case-bases that were previously found wanting.
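The abstract's point (i) — that counterfactual cases can be generated by permuting problem-features until a class-change is found — can be illustrated with a minimal sketch. This is not the paper's case-based technique, only the baseline perturbation idea it builds on, and it bakes in the sparsity preference (change as few features as possible) that the paper uses to define a "good" counterfactual. The `classify` rule, feature names, and value grids below are all hypothetical.

```python
from itertools import combinations, product

def classify(x):
    # Hypothetical stand-in for a black-box classifier: a loan is
    # approved when income minus twice the debt clears a threshold.
    return "approve" if x["income"] - 2 * x["debt"] >= 50 else "deny"

def find_counterfactual(query, candidate_values, max_changes=2):
    """Perturb features of `query`, sparsest changes first, until the
    predicted class flips; return the first (sparsest) counterfactual."""
    original = classify(query)
    features = list(candidate_values)
    for k in range(1, max_changes + 1):           # try 1 change, then 2, ...
        for subset in combinations(features, k):
            for values in product(*(candidate_values[f] for f in subset)):
                candidate = {**query, **dict(zip(subset, values))}
                if classify(candidate) != original:
                    return candidate
    return None

query = {"income": 60, "debt": 10}                # classified "deny"
grid = {"income": [60, 70, 80], "debt": [0, 5, 10]}
print(find_counterfactual(query, grid))           # a one-feature class flip
```

Because the search tries single-feature changes before pairs, the counterfactual it returns differs from the query on as few features as possible — the sparsity property the abstract flags as hard to obtain in many real datasets.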




Published In

Case-Based Reasoning Research and Development: 28th International Conference, ICCBR 2020, Salamanca, Spain, June 8–12, 2020, Proceedings
Springer-Verlag, Berlin, Heidelberg, June 2020. 359 pages
ISBN: 978-3-030-58341-5
DOI: 10.1007/978-3-030-58342-2
Editors: Ian Watson, Rosina Weber


Author Tags

1. CBR
2. Explanation
3. XAI
4. Counterfactuals
5. Contrastive


Cited By

• (2025) "Nullius in Explanans: an ethical risk assessment for explainable AI." Ethics and Information Technology 27(1). DOI: 10.1007/s10676-024-09800-7
• (2024) "Understanding the User Perception and Experience of Interactive Algorithmic Recourse Customization." ACM Transactions on Computer-Human Interaction 31(3), 1–25. DOI: 10.1145/3674503
• (2024) "Categorical and Continuous Features in Counterfactual Explanations of AI Systems." ACM Transactions on Interactive Intelligent Systems 14(4), 1–37. DOI: 10.1145/3673907
• (2024) "Post-hoc vs ante-hoc explanations." Cognitive Systems Research 86. DOI: 10.1016/j.cogsys.2024.101243
• (2024) "Navigating the Landscape of Case Fidelity and Competence in Case-Based Reasoning." Artificial Intelligence XLI, 235–249. DOI: 10.1007/978-3-031-77915-2_17
• (2024) "Even-Ifs from If-Onlys: Are the Best Semi-factual Explanations Found Using Counterfactuals as Guides?" Case-Based Reasoning Research and Development, 33–49. DOI: 10.1007/978-3-031-63646-2_3
• (2024) "Counterfactual-Based Synthetic Case Generation." Case-Based Reasoning Research and Development, 388–403. DOI: 10.1007/978-3-031-63646-2_25
• (2024) "Explaining Multiple Instances Counterfactually: User Tests of Group-Counterfactuals for XAI." Case-Based Reasoning Research and Development, 206–222. DOI: 10.1007/978-3-031-63646-2_14
• (2023) "The Privacy Issue of Counterfactual Explanations: Explanation Linkage Attacks." ACM Transactions on Intelligent Systems and Technology 14(5), 1–24. DOI: 10.1145/3608482
• (2023) "Categorical and Continuous Features in Counterfactual Explanations of AI Systems." Proceedings of the 28th International Conference on Intelligent User Interfaces, 171–187. DOI: 10.1145/3581641.3584090
