Abstract
Recent efforts have uncovered various methods for providing explanations that can help interpret the behavior of machine learning programs. Exact explanations with a rigorous logical foundation are valid and complete, but they raise an epistemological problem: they may be too complex for humans to understand and too expensive to compute even with automated reasoning methods. Interpretability requires good explanations that humans can grasp and that can be computed efficiently.
We take an important step toward specifying what good explanations are by analyzing the epistemically accessible and pragmatic aspects of explanations. We characterize sufficiently good, or fair and adequate, explanations in terms of counterfactuals and what we call the conundra of the explainee, the agent that requested the explanation. We provide a correspondence between logical and mathematical formulations of counterfactuals to examine how the partiality of counterfactual explanations can hide biases, and we define fair and adequate explanations in this setting. We then provide formal results about the algorithmic complexity of fair and adequate explanations.
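To make the counterfactual framing concrete, the sketch below searches for a smallest set of Boolean feature flips that changes a classifier's verdict, which is one common way to read a counterfactual explanation. This is an illustrative brute-force sketch, not the paper's algorithm; the function counterfactual_explanation and the toy loan policy grant are hypothetical. Its exhaustive subset enumeration, exponential in the number of features in the worst case, also hints at why the algorithmic-complexity results mentioned above matter.

```python
from itertools import combinations

def counterfactual_explanation(f, x):
    """Find a smallest set of feature indices whose flip changes f(x).

    Brute force over flip sets of increasing size; illustrative only.
    """
    y = f(x)
    for size in range(1, len(x) + 1):            # smallest flip sets first
        for idxs in combinations(range(len(x)), size):
            x_cf = list(x)
            for i in idxs:
                x_cf[i] = 1 - x_cf[i]            # flip Boolean feature i
            if f(tuple(x_cf)) != y:              # verdict changed: found one
                return set(idxs)
    return None                                  # f is constant

# Hypothetical loan policy: grant iff (high income and low debt) or guarantor.
grant = lambda x: int((x[0] and not x[1]) or x[2])
print(counterfactual_explanation(grant, (0, 1, 0)))  # {2}: a guarantor flips the decision
```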
Notes
- 1.
[10] provide a superficially similar picture to the pragmatic one we present, but their aim is rather different: to provide a semantics for argumentation frameworks. For us, the pragmatic aspect of explanations is better captured via a game-theoretic framework; see below.
- 2.
We are implicitly assuming that \(\hat{f}\) is too complex or opaque for its behaviour to be analyzed statically.
- 3.
By increasing the number of literals, we can simulate non-binary values, so this is not really a limitation as long as the features are finite; see the encoding sketch after these notes.
- 4.
See [18] for some experimental evidence of this.
- 5.
In fact, we only assume a finite set of finitely valued features, since an n-valued feature is definable with n Boolean-valued features. By complicating the language and logic [7], we can have probability estimates on literals and so encode continuous feature spaces.
- 6.
- 7.
- 8.
Of course \(\mathcal{E}\) might want to know whether her beliefs matched the bank’s reasons for denying her a loan, but that’s a different question—and in particular it’s not a why question.
- 9.
Perhaps \(\mathcal{E}\) is also mistaken about, or has an incomplete grasp of, f (or, if not, she is mistaken about how \(\hat{f}\) differs from f). But we will not pursue this here.
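Notes 3 and 5 assert that an n-valued feature is definable with n Boolean-valued features. Below is a minimal sketch of that reduction, assuming a standard one-hot encoding; the function name to_boolean_literals and the example domain are hypothetical, not the paper's notation.

```python
def to_boolean_literals(value, domain):
    """Encode an n-valued feature as n Boolean literals, exactly one true."""
    assert value in domain
    return {v: int(v == value) for v in domain}

# A 3-valued 'employment' feature becomes 3 Boolean features.
print(to_boolean_literals("self-employed",
                          ["employed", "self-employed", "unemployed"]))
# {'employed': 0, 'self-employed': 1, 'unemployed': 0}
```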
References
Achinstein, P.: The Nature of Explanation. Oxford University Press, Oxford (1980)
Amershi, S., Cakmak, M., Knox, W.B., Kulesza, T.: Power to the people: the role of humans in interactive machine learning. AI Mag. 35(4), 105–120 (2014)
Asher, N., Paul, S.: Strategic conversation under imperfect information: epistemic message exchange games. J. Logic Lang. Inf. 27(4), 343–385 (2018)
Bachoc, F., Gamboa, F., Halford, M., Loubes, J.M., Risser, L.: Entropic variable projection for explainability and interpretability. arXiv preprint arXiv:1810.07924 (2018)
Bromberger, S.: An approach to explanation. In: Butler, R. (ed.) Analytical Philosophy, pp. 72–105. Oxford University Press, Oxford (1962)
Chang, C.C., Keisler, H.J.: Model Theory. Elsevier (1990)
De Raedt, L., Dumančić, S., Manhaeve, R., Marra, G.: From statistical relational to neuro-symbolic artificial intelligence. arXiv preprint arXiv:2003.08316 (2020)
Doshi-Velez, F., Kim, B.: Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608 (2017)
Dube, S.: High dimensional spaces, deep learning and adversarial examples. arXiv preprint arXiv:1801.00634 (2018)
Fan, X., Toni, F.: On computing explanations in argumentation. In: Bonet, B., Koenig, S. (eds.) Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, pp. 1496–1502. AAAI Press (2015)
Friedrich, G., Zanker, M.: A taxonomy for generating explanations in recommender systems. AI Mag. 32(3), 90–98 (2011)
Gärdenfors, P., Makinson, D.: Revisions of knowledge systems using epistemic entrenchment. In: Vardi, M.Y. (ed.) Proceedings of the Second Conference on Theoretical Aspects of Reasoning about Knowledge, pp. 83–95. Morgan Kaufmann, San Francisco (1988)
Ginsberg, M.L.: Counterfactuals. Artif. Intell. 30(1), 35–79 (1986)
Hempel, C.G.: Aspects of Scientific Explanation. Free Press, New York (1965)
Holzinger, A., Carrington, A., Müller, H.: Measuring the quality of explanations: the system causability scale (SCS). KI-Künstliche Intelligenz, pp. 1–6 (2020)
Holzinger, A., Malle, B., Saranti, A., Pfeifer, B.: Towards multi-modal causability with graph neural networks enabling information fusion for explainable AI. Inf. Fusion 71, 28–37 (2021)
Holzinger, A., Plass, M., Kickmeier-Rust, M., Holzinger, K., Crişan, G.C., Pintea, C.M., Palade, V.: Interactive machine learning: experimental evidence for the human in the algorithmic loop. Appl. Intell. 49(7), 2401–2414 (2019)
Ignatiev, A., Narodytska, N., Asher, N., Marques-Silva, J.: On relating “why?” and “why not?” explanations. In: Proceedings of AI*IA 2020 (2020)
Ignatiev, A., Narodytska, N., Marques-Silva, J.: On relating explanations and adversarial examples. In: Advances in Neural Information Processing Systems (2019)
Johnson, D.S., Papadimitriou, C.H., Yannakakis, M.: How easy is local search? J. Comput. Syst. Sci. 37(1), 79–100 (1988)
Junker, U.: Preferred explanations and relaxations for over-constrained problems. In: AAAI-2004 (2004)
Karimi, A.H., Barthe, G., Balle, B., Valera, I.: Model-agnostic counterfactual explanations for consequential decisions. In: International Conference on Artificial Intelligence and Statistics, pp. 895–905. PMLR (2020)
Kurakin, A., Goodfellow, I., Bengio, S.: Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533 (2016)
Kusner, M.J., Loftus, J., Russell, C., Silva, R.: Counterfactual fairness. In: Advances in Neural Information Processing Systems, pp. 4066–4076 (2017)
Laugel, T., Lesot, M.-J., Marsala, C., Renard, X., Detyniecki, M.: Unjustified classification regions and counterfactual explanations in machine learning. In: Brefeld, U., Fromont, E., Hotho, A., Knobbe, A., Maathuis, M., Robardet, C. (eds.) ECML PKDD 2019. LNCS (LNAI), vol. 11907, pp. 37–54. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-46147-8_3
Lewis, D.: Causation. J. Philos. 70(17), 556–567 (1973)
Lundberg, S.M., Lee, S.: A unified approach to interpreting model predictions. In: NIPS, pp. 4765–4774 (2017)
Miller, T.: Explanation in artificial intelligence: insights from the social sciences. Artif. Intell. 267, 1–38 (2019)
Molnar, C.: Interpretable Machine Learning. Lulu.com (2019)
Murdoch, W.J., Singh, C., Kumbier, K., Abbasi-Asl, R., Yu, B.: Definitions, methods, and applications in interpretable machine learning. Proc. Natl. Acad. Sci. 116(44), 22071–22080 (2019)
Papadimitriou, C.H., Schäffer, A.A., Yannakakis, M.: On the complexity of local search. In: Proceedings of the Twenty-Second Annual ACM Symposium on Theory of Computing, pp. 438–445 (1990)
Pearl, J.: System Z: a natural ordering of defaults with tractable applications to nonmonotonic reasoning. In: Proceedings of the 3rd Conference on Theoretical Aspects of Reasoning about Knowledge (TARK 1990), pp. 121–135 (1990)
Peyré, G., Cuturi, M.: Computational optimal transport: with applications to data science. Found. Trends Mach. Learn. 11(5–6), 355–607 (2019)
Ribeiro, M.T., Singh, S., Guestrin, C.: “Why should I trust you?”: explaining the predictions of any classifier. In: KDD, pp. 1135–1144 (2016)
Ribeiro, M.T., Singh, S., Guestrin, C.: Anchors: high-precision model-agnostic explanations. In: AAAI, pp. 1527–1535 (2018)
Salzberg, S.: Distance metrics for instance-based learning. In: Ras, Z.W., Zemankova, M. (eds.) ISMIS 1991. LNCS, vol. 542, pp. 399–408. Springer, Heidelberg (1991). https://doi.org/10.1007/3-540-54563-8_103
Spence, A.M.: Job market signaling. Q. J. Econ. 87(3), 355–374 (1973)
Wachter, S., Mittelstadt, B., Russell, C.: Counterfactual explanations without opening the black box: automated decisions and the GDPR. Harv. JL Tech. 31, 841 (2017)
Williamson, T.: First-order logics for comparative similarity. Notre Dame J. Formal Logic 29(4) (1988)
Younes, L.: Diffeomorphic learning. arXiv preprint arXiv:1806.01240 (2019)
Acknowledgement
We thank the ANR PRCI grant SLANT, the ICT 38 EU grant COALA and the 3IA Institute ANITI, funded by the ANR-19-PI3A-0004 grant, for research support. We also thank the reviewers for their insightful comments.
Copyright information
© 2021 IFIP International Federation for Information Processing
Cite this paper
Asher, N., Paul, S., Russell, C. (2021). Fair and Adequate Explanations. In: Holzinger, A., Kieseberg, P., Tjoa, A.M., Weippl, E. (eds.) Machine Learning and Knowledge Extraction. CD-MAKE 2021. Lecture Notes in Computer Science, vol. 12844. Springer, Cham. https://doi.org/10.1007/978-3-030-84060-0_6