Abstract
In recent years, many Machine Learning-based systems have achieved extraordinary performance in a wide range of fields, especially Natural Language Processing and Computer Vision. The construction of appropriate and meaningful explanations for the decisions of those systems, which are in many cases considered black boxes, is a hot research topic. Most of the state-of-the-art explanation systems in the field rely on generating synthetic instances, testing them on the black box, analysing the results and building an interpretable model, which is then used to provide an understandable explanation. In this paper we focus on this generation process and propose Guided-LORE, a variant of the well-known LORE method. We conceptualise the problem of neighbourhood generation as a search problem and solve it using the Uniform Cost Search algorithm. The experimental results on four datasets reveal the effectiveness of our method when compared with the most relevant related works.
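For intuition, the minimal sketch below shows how neighbourhood generation can be framed as Uniform Cost Search: instances around the one being explained are expanded in order of increasing accumulated distance and labelled by the black box, producing the local dataset on which an interpretable model is later trained. The function names (perturb, generate_neighbourhood), the unit-step perturbation of numerical features and the stopping criterion are illustrative assumptions, not the exact procedure of Guided-LORE.

import heapq
import itertools

def perturb(instance, step=1.0):
    # Yield successors by nudging one numerical feature at a time,
    # together with the cost of the move (here, the size of the nudge).
    for i in range(len(instance)):
        for delta in (-step, step):
            neighbour = list(instance)
            neighbour[i] += delta
            yield tuple(neighbour), abs(delta)

def generate_neighbourhood(x, black_box, max_instances=200):
    # Uniform Cost Search: expand instances around x in order of
    # increasing accumulated distance, labelling each with the black box.
    counter = itertools.count()  # tie-breaker so the heap never compares instances
    frontier = [(0.0, next(counter), tuple(x))]
    visited, neighbourhood = set(), []
    while frontier and len(neighbourhood) < max_instances:
        cost, _, node = heapq.heappop(frontier)
        if node in visited:
            continue
        visited.add(node)
        neighbourhood.append((node, black_box(node)))
        for successor, move_cost in perturb(node):
            if successor not in visited:
                heapq.heappush(frontier, (cost + move_cost, next(counter), successor))
    return neighbourhood

The labelled neighbourhood returned by such a routine would then be handed to an interpretable surrogate, such as the decision tree used in LORE, from which an understandable explanation is extracted.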
Acknowledgements
The authors gratefully acknowledge the support of the FIS project PI18/00169 (Fondos FEDER) and the URV grant 2018-PFR-B2-61. The first author is supported by a URV Martí Franquès predoctoral grant.
Copyright information
© 2021 Springer Nature Switzerland AG
Cite this paper
Maaroof, N., Moreno, A., Valls, A., Jabreel, M. (2021). Guided-LORE: Improving LORE with a Focused Search of Neighbours. In: Heintz, F., Milano, M., O'Sullivan, B. (eds.) Trustworthy AI - Integrating Learning, Optimization and Reasoning. TAILOR 2020. Lecture Notes in Computer Science, vol. 12641. Springer, Cham. https://doi.org/10.1007/978-3-030-73959-1_4
Print ISBN: 978-3-030-73958-4
Online ISBN: 978-3-030-73959-1
eBook Packages: Computer Science (R0)