DOI: 10.1145/3580305.3599181

Hands-on Tutorial: "Explanations in AI: Methods, Stakeholders and Pitfalls"

Published: 04 August 2023

Abstract

While using vast amounts of training data and sophisticated models has enhanced the predictive performance of Machine Learning (ML) and Artificial Intelligence (AI) solutions, it has also made their predictions harder to comprehend. The ability to explain predictions is often one of the primary desiderata for adopting AI and ML solutions [6, 13]. This desire for explainability has produced a rapidly growing body of literature on explainable AI (XAI) and hundreds of XAI methods targeting different domains (e.g., finance, healthcare), applications (e.g., model debugging, actionable recourse), data modalities (e.g., tabular data, images), models (e.g., transformers, convolutional neural networks), and stakeholders (e.g., end-users, regulatory authorities, data scientists). The goal of this tutorial is to give participants a comprehensive overview of the XAI field. As a hands-on tutorial, we will showcase state-of-the-art methods that can be used across data modalities and contexts to extract the right abstractions for interpretation. We will also cover common pitfalls in using explanations, e.g., misrepresentation and lack of robustness.
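To give a flavor of the hands-on portion, the following is a minimal, self-contained sketch (not drawn from the tutorial's own notebooks) of post-hoc feature attribution on tabular data with SHAP [23]. The random-forest model, the diabetes dataset, and the perturbation probe at the end are illustrative choices only; the final step merely gestures at the explanation-robustness pitfalls discussed in [2, 30].

```python
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train a simple "black-box" model on a small tabular regression task.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Shapley-value-based attributions [23] for individual predictions.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:5])  # array of shape (5, n_features)

# Per-feature contribution to the first prediction; the sign indicates whether
# the feature pushes the prediction above or below the model's expected output.
for name, value in zip(X.columns, shap_values[0]):
    print(f"{name:>10s}: {value:+.3f}")

# A simple probe of explanation robustness [2, 30]: recompute the attribution
# after a small random perturbation of the same instance and compare.
x0 = X.iloc[[0]]
x_perturbed = x0 + np.random.default_rng(0).normal(scale=0.01, size=x0.shape)
delta = explainer.shap_values(x_perturbed)[0] - shap_values[0]
print("max attribution change under small perturbation:", np.abs(delta).max())
```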

References

[1] Julius Adebayo, Justin Gilmer, Michael Muelly, Ian Goodfellow, Moritz Hardt, and Been Kim. 2018. Sanity Checks for Saliency Maps. In NeurIPS.
[2] David Alvarez Melis and Tommi Jaakkola. 2018. Towards Robust Interpretability with Self-explaining Neural Networks. In NeurIPS.
[3] Pepa Atanasova, Jakob Grue Simonsen, Christina Lioma, and Isabelle Augenstein. 2020. A Diagnostic Study of Explainability Techniques for Text Classification. In EMNLP.
[4] Maximilian Augustin, Alexander Meinke, and Matthias Hein. 2020. Adversarial Robustness on In- and Out-distribution Improves Explainability. In ECCV.
[5] Solon Barocas, Andrew Selbst, and Manish Raghavan. 2020. The Hidden Assumptions Behind Counterfactual Explanations and Principal Reasons. In FAccT.
[6] Bryan Casey, Ashkon Farhangi, and Roland Vogl. 2019. Rethinking Explainable Machines: The GDPR's 'Right to Explanation' Debate and the Rise of Algorithmic Audits in Enterprise. Berkeley Tech. LJ.
[7] Chaofan Chen, Oscar Li, Daniel Tao, Alina Barnett, Cynthia Rudin, and Jonathan K Su. 2019. This Looks Like That: Deep Learning for Interpretable Image Recognition. In NeurIPS.
[8] Jay DeYoung, Sarthak Jain, Nazneen Fatema Rajani, Eric Lehman, Caiming Xiong, Richard Socher, and Byron C. Wallace. 2020. ERASER: A Benchmark to Evaluate Rationalized NLP Models. In ACL.
[9] Finale Doshi-Velez and Been Kim. 2017. Towards A Rigorous Science of Interpretable Machine Learning. arXiv:1702.08608 [cs, stat].
[10] Andrew Elliott, Stephen Law, and Chris Russell. 2021. Explaining Classifiers Using Adversarial Perturbations on the Perceptual Ball. In CVPR.
[11] K. Gade, S. Geyik, K. Kenthapadi, V. Mithal, and A. Taly. 2019--2020. Explainable AI in Industry: Practical Challenges and Lessons Learned. https://tinyurl.com/2wjejapt, https://tinyurl.com/5hcpve38, and https://tinyurl.com/mry2586c.
[12] L. H. Gilpin, D. Bau, B. Z. Yuan, A. Bajwa, M. Specter, and L. Kagal. 2018. Explaining Explanations: An Overview of Interpretability of Machine Learning. In DSAA.
[13] L. Goasduff. 2019. 3 Barriers to AI Adoption. https://www.gartner.com/smarterwithgartner/3-barriers-to-ai-adoption.
[14] Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Franco Turini, Fosca Giannotti, and Dino Pedreschi. 2018. A Survey of Methods for Explaining Black Box Models. ACM Computing Surveys.
[15] Peter Hase and Mohit Bansal. 2020. Evaluating Explainable AI: Which Algorithmic Explanations Help Users Predict Model Behavior? In ACL.
[16] Sarthak Jain and Byron C. Wallace. 2019. Attention is not Explanation. In NAACL.
[17] K. Kenthapadi, B. Packer, M. Sameki, and N. Sephus. 2021. Responsible AI in Industry: Practical Challenges and Lessons Learned. https://icml.cc/virtual/2021/tutorial/10841.
[18] Pang Wei Koh and Percy Liang. 2017. Understanding Black-box Predictions via Influence Functions. In ICML.
[19] I Elizabeth Kumar, Suresh Venkatasubramanian, Carlos Scheidegger, and Sorelle Friedler. 2020. Problems with Shapley-value-based Explanations as Feature Importance Measures. In ICML.
[20] H. Lakkaraju, J. Adebayo, and S. Singh. 2020. Explaining Machine Learning Predictions: State-of-the-art, Challenges, and Opportunities. https://explainml-tutorial.github.io/neurips20.
[21] F. Lecue, K. Gade, S. Geyik, K. Kenthapadi, V. Mithal, A. Taly, R. Guidotti, and P. Minervini. 2020. Explainable AI: Foundations, Industrial Applications, Practical Challenges, and Lessons Learned. https://xaitutorial2020.github.io/.
[22] Zachary C. Lipton. 2018. The Mythos of Model Interpretability. Communications of the ACM.
[23] Scott M. Lundberg and Su-In Lee. 2017. A Unified Approach to Interpreting Model Predictions. In NeurIPS.
[24] Tim Miller, Piers Howe, and Liz Sonenberg. 2017. Explainable AI: Beware of Inmates Running the Asylum. arXiv preprint arXiv:1712.00547.
[25] Brent Mittelstadt, Chris Russell, and Sandra Wachter. 2019. Explaining Explanations in AI. In FAT*, 279--288.
[26] Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. "Why Should I Trust You?" Explaining the Predictions of Any Classifier. In KDD.
[27] Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. 2017. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. In ICCV.
[28] Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017. Axiomatic Attribution for Deep Networks. In ICML.
[29] Sandra Wachter, Brent Mittelstadt, and Chris Russell. 2017. Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR. Harv. JL & Tech.
[30] Muhammad Bilal Zafar, Michele Donini, Dylan Slack, Cédric Archambeau, Sanjiv Das, and Krishnaram Kenthapadi. 2021. On the Lack of Robust Interpretability of Neural Text Classifiers. In Findings of ACL.

      Published In

      KDD '23: Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining
August 2023, 5996 pages
ISBN: 9798400701030
DOI: 10.1145/3580305

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Author Tags

      1. bias
      2. explainability
      3. quality of explanations
      4. trustworthiness
      5. xai

      Conference

      KDD '23

      Acceptance Rates

      Overall Acceptance Rate 1,133 of 8,635 submissions, 13%
