Abstract
With recent progress in the field of Explainable Artificial Intelligence (XAI) and its increasing use in practice, there is a growing need to evaluate different XAI methods and the quality of their explanations in practical usage scenarios. For this purpose, we present VitrAI, a web-based service that uniformly demonstrates four different XAI algorithms in the context of three real-life scenarios and evaluates their performance and comprehensibility for humans. This work highlights practical obstacles to the use of XAI methods and shows that different XAI algorithms are only partially consistent with one another and meet human expectations only unsystematically.
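One way the claimed partial consistency between XAI algorithms could be quantified is by comparing the feature-importance rankings that two post-hoc methods produce for the same prediction. The following is a minimal, self-contained sketch of that idea; the attribution values and the choice of Spearman rank correlation as an agreement metric are illustrative assumptions, not the evaluation protocol used in VitrAI.

```python
# Hypothetical sketch: measuring agreement between two XAI methods'
# feature attributions via Spearman rank correlation (pure Python,
# no tie handling). All numbers below are made up for illustration.

def ranks(values):
    """Rank features by attribution magnitude (0 = most important)."""
    order = sorted(range(len(values)), key=lambda i: -abs(values[i]))
    r = [0] * len(values)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

def spearman(a, b):
    """Spearman rank correlation between two attribution vectors."""
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Two hypothetical attribution vectors for the same model prediction:
method_a = [0.9, 0.1, -0.4, 0.05]  # e.g. from a perturbation-based method
method_b = [0.2, -0.6, 0.5, 0.1]   # e.g. from a gradient-based method

agreement = spearman(method_a, method_b)  # 1.0 = identical rankings
```

A value well below 1.0 would indicate that the two explanation methods disagree about which input features drove the prediction, which is the kind of inconsistency the abstract reports.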
Acknowledgment
This work was conducted together with students from the University of Stuttgart.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Hanussek, M., Kötter, F., Kintz, M., Drawehn, J. (2022). VitrAI: Applying Explainable AI in the Real World. In: Arai, K. (eds) Intelligent Systems and Applications. IntelliSys 2021. Lecture Notes in Networks and Systems, vol 295. Springer, Cham. https://doi.org/10.1007/978-3-030-82196-8_2
Print ISBN: 978-3-030-82195-1
Online ISBN: 978-3-030-82196-8
eBook Packages: Intelligent Technologies and Robotics (R0)