
Explaining AI Decisions Using Efficient Methods for Learning Sparse Boolean Formulae

Published: 01 December 2019

Abstract

In this paper, we consider the problem of learning Boolean formulae from examples obtained by actively querying an oracle that labels these examples as either positive or negative. This problem has received attention in both the machine learning and the formal methods communities, and it has been shown to have exponential worst-case complexity in the general case as well as under many restrictions. Here, we focus on learning sparse Boolean formulae that depend on only a small (but unknown) subset of the overall vocabulary of atomic propositions. We propose two algorithms, the first based on binary search in the Hamming space and the second based on a random walk on the Boolean hypercube, to learn these sparse Boolean formulae with a given confidence. The sparsity assumption is motivated by the problem of mining explanations for decisions made by artificially intelligent (AI) algorithms, where the explanation of an individual decision may depend on a small but unknown subset of all the inputs to the algorithm. We demonstrate the use of these algorithms in automatically generating explanations of such decisions. These explanations will make intelligent systems more understandable and accountable to human users, facilitate easier audits, and provide diagnostic information in the case of failure. The proposed approach treats the AI algorithm as a black-box oracle; hence, it is broadly applicable and agnostic to the specific AI algorithm. We show that the number of examples needed by both proposed algorithms grows only logarithmically with the size of the vocabulary of atomic propositions. We illustrate the practical effectiveness of our approach on a diverse set of case studies.
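To make the logarithmic query-complexity claim concrete, the following is a minimal Python sketch of the core step of the first algorithm: a binary search in the Hamming space between one positive and one negative example that isolates a single variable the decision actually depends on. This is an illustration written under our own assumptions, not the authors' implementation; the callable `oracle` is a hypothetical stand-in for the black-box AI algorithm, and `find_relevant_variable` is an illustrative name that does not appear in the paper.

# A minimal sketch (assumptions: a black-box `oracle` returning 0/1, and
# examples given as equal-length 0/1 lists). Illustrates the binary-search
# idea from the abstract; not the authors' implementation.

def find_relevant_variable(oracle, pos, neg):
    """Locate one input variable the oracle's decision depends on.

    Precondition: oracle(pos) == 1 and oracle(neg) == 0.
    Uses O(log n) oracle queries for n = len(pos), since the set of
    candidate positions is halved on every iteration.
    """
    # Positions where the two examples disagree; at least one of them
    # must be relevant, because flipping all of them flips the label.
    diff = [i for i in range(len(pos)) if pos[i] != neg[i]]
    # Invariant: oracle(current) == 1, and flipping every position in
    # `diff` to neg's value yields a negatively labeled point.
    current = list(pos)
    while len(diff) > 1:
        half = diff[: len(diff) // 2]
        candidate = list(current)
        for i in half:                 # flip half of the disagreeing bits
            candidate[i] = neg[i]
        if oracle(candidate) == 1:     # label survived: keep these flips
            current = candidate        # and search the other half
            diff = diff[len(diff) // 2:]
        else:                          # label flipped: a relevant bit is here
            diff = half
    return diff[0]

# Hypothetical usage: a decision over 8 inputs that is sparse, depending
# only on x2 and x5, so the search needs about log2(8) = 3 queries, not 8.
oracle = lambda x: int(x[2] == 1 and x[5] == 0)
pos = [0, 0, 1, 0, 0, 0, 0, 0]   # labeled positive by the oracle
neg = [1, 1, 0, 1, 1, 1, 1, 1]   # labeled negative by the oracle
print(find_relevant_variable(oracle, pos, neg))   # -> 2, a relevant input

Iterating this bisection after accounting for variables already found would recover the full support of a sparse formula; the random-walk variant mentioned in the abstract trades this deterministic bisection for probabilistic guarantees at a given confidence.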




Published In

Journal of Automated Reasoning, Volume 63, Issue 4
December 2019, 317 pages

Publisher

Springer-Verlag, Berlin, Heidelberg

Author Tags

1. Explainable AI
2. Boolean formula learning
3. Machine learning
4. Formal methods
5. Interpretable AI
6. Sparse learning

Qualifiers

• Research-article


Cited By

• (2023) Learning Monitor Ensembles for Operational Design Domains. In: Runtime Verification, pp. 271–290. https://doi.org/10.1007/978-3-031-44267-4_14
• (2022) Learning to reason with neural networks. In: Proceedings of the 36th International Conference on Neural Information Processing Systems, pp. 2709–2722. https://doi.org/10.5555/3600270.3600466
• (2022) Toward verified artificial intelligence. Communications of the ACM 65(7), 46–55. https://doi.org/10.1145/3503914
• (2020) Interpretable Policies from Formally-Specified Temporal Properties. In: 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC), pp. 1–7. https://doi.org/10.1109/ITSC45102.2020.9294442
• (2020) From Contrastive to Abductive Explanations and Back Again. In: AIxIA 2020 – Advances in Artificial Intelligence, pp. 335–355. https://doi.org/10.1007/978-3-030-77091-4_21
