DOI: 10.1145/3635138.3654762

A Data Management Approach to Explainable AI

Published: 09 June 2024

Abstract

    In recent years, there has been a growing interest in developing methods to explain individual predictions made by machine learning models. This has led to the development of various notions of explanation and scores to justify a model's classification. However, instead of struggling with the increasing number of such notions, one can turn to an old tradition in databases and develop a declarative query language for interpretability tasks, which would allow users to specify and test their own explainability queries. Not surprisingly, logic is a suitable declarative language for this task, as it has a well-understood syntax and semantics, and there are many tools available to study its expressiveness and the complexity of the query evaluation problem. In this talk, we will discuss some recent work on developing such a logic for model interpretability.
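
    To make this concrete, here is a minimal sketch of how one explainability query could be specified and tested, assuming boolean features and a toy decision-tree representation; it is written in Python rather than logic purely for illustration. The query is the well-known "sufficient reason" question: does fixing a given set S of features to their values in an instance x force the model's classification of x? All names here (Leaf, Node, is_sufficient_reason) are hypothetical, not the formalism of the talk.

    # Hypothetical sketch: evaluating the explainability query "is the
    # feature set S a sufficient reason for the classification of x?"
    # on a toy boolean decision tree. Representation and names are
    # illustrative assumptions, not the formalism from the talk.
    from dataclasses import dataclass
    from typing import Union

    @dataclass
    class Leaf:
        label: int                    # class assigned at this leaf (0 or 1)

    @dataclass
    class Node:
        feature: int                  # index of the boolean feature tested
        low: Union["Node", Leaf]      # subtree followed when the feature is 0
        high: Union["Node", Leaf]     # subtree followed when the feature is 1

    def classify(tree, x):
        """Follow the path of instance x (a tuple of 0/1 values) to a leaf."""
        while isinstance(tree, Node):
            tree = tree.high if x[tree.feature] else tree.low
        return tree.label

    def is_sufficient_reason(tree, x, S):
        """True iff fixing the features in S to their values in x forces
        every completion of the remaining features into x's class."""
        target = classify(tree, x)

        def forced(t):
            if isinstance(t, Leaf):
                return t.label == target
            if t.feature in S:        # fixed feature: follow x's branch only
                return forced(t.high if x[t.feature] else t.low)
            # free feature: both branches must reach the target class
            return forced(t.low) and forced(t.high)

        return forced(tree)

    # Usage: a tree computing x0 AND x1, and the instance x = (1, 1).
    tree = Node(0, Leaf(0), Node(1, Leaf(0), Leaf(1)))
    x = (1, 1)
    print(is_sufficient_reason(tree, x, {0}))     # False: x1 is still free
    print(is_sufficient_reason(tree, x, {0, 1}))  # True

    The appeal of the declarative approach is that the user writes only the query, as a short logical formula, while evaluation and its complexity can then be studied uniformly across different classes of models.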


    Published In

    PODS '24: Companion of the 43rd Symposium on Principles of Database Systems
    June 2024
    27 pages
    ISBN: 9798400704833
    DOI: 10.1145/3635138
    Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Author Tags

    1. explainability language
    2. explainable artificial intelligence
    3. query language

    Qualifiers

    • Research-article

    Funding Sources

    • ANID - Millennium Science Initiative Program

    Conference

    SIGMOD/PODS '24
    Acceptance Rates

    Overall acceptance rate: 642 of 2,707 submissions (24%)
