DOI:10.1007/978-3-031-46002-9_7

Formal XAI via Syntax-Guided Synthesis

Published: 14 December 2023
    Abstract

    In this paper, we propose a novel application of syntax-guided synthesis to find symbolic representations of a model’s decision-making process, designed for easy comprehension and validation by humans. Our approach takes input-output samples from complex machine learning models, such as deep neural networks, and automatically derives interpretable mimic programs. A mimic program precisely imitates the behavior of an opaque model over the provided data. We discuss various types of grammars that are well-suited for computing mimic programs for tabular and image input data.
    Our experiments demonstrate the potential of the proposed method: we successfully synthesized mimic programs for neural networks trained on the MNIST and the Pima Indians diabetes data sets. All experiments were performed using the SMT-based cvc5 synthesis tool.



    Published In

    Bridging the Gap Between AI and Reality: First International Conference, AISoLA 2023, Crete, Greece, October 23–28, 2023, Proceedings
    Oct 2023
    453 pages
    ISBN:978-3-031-46001-2
    DOI:10.1007/978-3-031-46002-9
    Editor: Bernhard Steffen

    Publisher

    Springer-Verlag, Berlin, Heidelberg


    Author Tags

    1. Syntax-Guided Synthesis (SyGuS)
    2. Explainable Machine Learning
    3. Program Synthesis
    4. Programming by Example (PbE)

