Abstract
Counterfactual explanations are an important solution to the Explainable AI (XAI) problem, but good, “native” counterfactuals can be hard to come by. Hence, popular methods generate synthetic counterfactuals using “blind” perturbation, manipulating feature values to elicit a class change. However, this strategy has problems of its own, notably a tendency to generate invalid data points that are out-of-distribution or that involve feature values that do not occur naturally in a given domain. Instance-guided and case-based methods address these problems by grounding counterfactual generation in the dataset or case base, producing synthetic counterfactuals from naturally-occurring features and guaranteeing the reuse of valid feature values. Several instance-guided methods have been proposed, but they too have shortcomings: some only approximate grounding in the dataset, do not readily generalise to multi-class settings, or are limited in their ability to generate alternative counterfactuals. This paper extends recent case-based approaches by presenting a novel, general-purpose, case-based solution for counterfactual generation that addresses these shortcomings. We report a series of experiments that systematically explore parametric variations on common datasets, establishing the conditions for optimal performance beyond the state of the art in instance-guided methods for counterfactual XAI.
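To make the contrast with “blind” perturbation concrete, the following is a minimal, illustrative sketch of the nearest-unlike-neighbour retrieval that grounds many instance-guided methods: the counterfactual candidate is drawn from the case base itself, so every feature value is guaranteed to occur naturally in the domain. This is not the paper's full algorithm; the function name, Euclidean distance, and parameter names are assumptions for illustration only.

```python
import numpy as np

def nearest_unlike_neighbour(x, x_class, X, y, target_class=None):
    """Retrieve the case-base instance nearest to the query x whose label
    differs from x's class -- a 'native' counterfactual candidate whose
    feature values all occur naturally in the dataset.

    x:            1-D query feature vector
    x_class:      class assigned to x by the model being explained
    X, y:         case base (2-D feature matrix) and its labels
    target_class: optionally restrict candidates to a single class,
                  which extends the idea to multi-class settings
    """
    mask = (y == target_class) if target_class is not None else (y != x_class)
    candidates = X[mask]
    distances = np.linalg.norm(candidates - x, axis=1)  # Euclidean distance
    return candidates[np.argmin(distances)]
```

Because the returned instance is taken verbatim from the case base, it cannot be out-of-distribution in the way a blindly perturbed synthetic point can be; the trade-off, as the paper discusses, is that good native counterfactuals may be sparse.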
Notes
- 1. For now, we drop the d without loss of generality.
- 2. scikit-learn, with deviance loss, a learning rate of 0.1, and 100 boosting stages (see the configuration sketch after these notes).
- 3. As this is an instance-based technique, the out-of-distribution metrics sometimes used to evaluate perturbation-based techniques are not germane.
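For reference, the classifier configuration in footnote 2 corresponds to the following scikit-learn setup. This is a sketch under stated assumptions: the variable name is ours, and in scikit-learn ≥ 1.1 the deviance loss is spelled "log_loss" rather than "deviance".

```python
from sklearn.ensemble import GradientBoostingClassifier

# Gradient boosting as described in footnote 2: deviance (logistic) loss,
# a learning rate of 0.1, and 100 boosting stages.
clf = GradientBoostingClassifier(
    loss="deviance",   # renamed to "log_loss" in scikit-learn >= 1.1
    learning_rate=0.1,
    n_estimators=100,
)
# Usage: clf.fit(X_train, y_train); clf.predict(X_test)
```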
Acknowledgements
Supported by Science Foundation Ireland via the Insight SFI Research Centre for Data Analytics (12/RC/2289) and by the Department of Agriculture, Food and the Marine via the VistaMilk SFI Research Centre (16/RC/3835).
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Smyth, B., Keane, M.T. (2022). A Few Good Counterfactuals: Generating Interpretable, Plausible and Diverse Counterfactual Explanations. In: Keane, M.T., Wiratunga, N. (eds) Case-Based Reasoning Research and Development. ICCBR 2022. Lecture Notes in Computer Science, vol 13405. Springer, Cham. https://doi.org/10.1007/978-3-031-14923-8_2
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-14922-1
Online ISBN: 978-3-031-14923-8