
VitrAI: Applying Explainable AI in the Real World

  • Conference paper
Intelligent Systems and Applications (IntelliSys 2021)

Part of the book series: Lecture Notes in Networks and Systems (LNNS, volume 295)


Abstract

With recent progress in the field of Explainable Artificial Intelligence (XAI) and its increasing use in practice, there is a growing need to evaluate different XAI methods and the quality of their explanations in practical usage scenarios. For this purpose, we present VitrAI, a web-based service that uniformly demonstrates four different XAI algorithms in the context of three real-life scenarios and evaluates their performance and comprehensibility for humans. This work highlights practical obstacles to the use of XAI methods and shows that the various XAI algorithms are only partially consistent with one another and meet human expectations only unsystematically.


Notes

  1. https://spacy.io/.
  2. http://mmlab.ie.cuhk.edu.hk/datasets/comp_cars/index.html.
  3. https://www.kaggle.com/jsphyg/weather-dataset-rattle-package.
  4. https://scikit-learn.org/stable/.
  5. https://github.com/pandas-profiling/pandas-profiling.
  6. https://scikit-learn.org/stable/modules/permutation_importance.html.
  7. https://www.docker.com/.
  8. https://angular.io/.
  9. https://akveo.github.io/nebular/.
  10. https://www.primefaces.org/primeng/.
  11. https://www.djangoproject.com/.
  12. https://flask.palletsprojects.com/en/1.1.x/.
  13. https://couchdb.apache.org/.
  14. https://www.tensorflow.org/.
  15. https://keras.io/.
  16. https://github.com/tensorflow/lucid.
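
As a rough illustration of how one of the tools listed above is typically used, the following is a minimal, hypothetical sketch of scikit-learn's permutation importance (note 6) applied to synthetic data standing in for a tabular task such as the rain-prediction scenario (note 3). The data, model choice, and parameters are illustrative assumptions and do not reflect the paper's actual pipeline.

    # Minimal, illustrative sketch (not the paper's pipeline): scikit-learn
    # permutation importance (note 6) on synthetic data standing in for a
    # tabular task such as the rain-prediction scenario (note 3).
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Synthetic data in place of a preprocessed weather table
    X, y = make_classification(n_samples=1000, n_features=8, n_informative=4,
                               random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Permutation importance: how much the test score drops when each
    # feature column is shuffled
    result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                    random_state=0)
    for i in result.importances_mean.argsort()[::-1]:
        print(f"feature {i}: {result.importances_mean[i]:.3f} "
              f"+/- {result.importances_std[i]:.3f}")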


Acknowledgment

This work was conducted together with students from the University of Stuttgart.

Author information

Corresponding author

Correspondence to Marc Hanussek.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Hanussek, M., Kötter, F., Kintz, M., Drawehn, J. (2022). VitrAI: Applying Explainable AI in the Real World. In: Arai, K. (eds) Intelligent Systems and Applications. IntelliSys 2021. Lecture Notes in Networks and Systems, vol 295. Springer, Cham. https://doi.org/10.1007/978-3-030-82196-8_2
