Abstract
Many applications need to interpret the predictions made by machine learning algorithms. In light of this, this paper provides a literature review analyzing the use of interpretability frameworks, tools coupled to algorithms that support a better understanding of output predictions. From a preliminary screening of 143 scientific articles, 26 studies were included in the review; altogether, these studies cited 10 frameworks, among which LIME occurred most frequently. Finally, the interpretations produced by the LIME and SHAP frameworks were compared qualitatively, and similar behaviors were observed in the explanations generated for a neural network and a random forest, allowing an understanding of which features influence particular predictions of non-transparent models considered black boxes.
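To illustrate the kind of per-prediction attribution the abstract refers to, the sketch below computes exact Shapley values, the quantity that SHAP approximates, for a toy black-box function. The function `black_box`, the instance `x`, and the all-zeros baseline are illustrative assumptions, not the models or data used in the paper; a real study would replace them with a trained neural network or random forest and a library such as `shap`.

```python
from itertools import combinations
from math import factorial

def black_box(x):
    # Toy stand-in for an opaque model: an interaction term plus a linear term.
    return 2.0 * x[0] * x[1] + 3.0 * x[2]

def shapley_values(f, x, baseline):
    """Exact Shapley values: each feature's attribution is the change in f
    when that feature is switched from its baseline value to its actual
    value, averaged over all subsets of the remaining features with the
    standard Shapley weighting."""
    n = len(x)
    phi = [0.0] * n
    features = list(range(n))
    for i in features:
        others = [j for j in features if j != i]
        for r in range(len(others) + 1):
            for subset in combinations(others, r):
                weight = (factorial(len(subset)) * factorial(n - len(subset) - 1)
                          / factorial(n))
                # Feature j takes its real value if it is "switched on".
                with_i = [x[j] if (j in subset or j == i) else baseline[j]
                          for j in features]
                without_i = [x[j] if j in subset else baseline[j]
                             for j in features]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

x = [1.0, 2.0, 3.0]
baseline = [0.0, 0.0, 0.0]
phi = shapley_values(black_box, x, baseline)
# Efficiency property: the attributions sum to f(x) - f(baseline).
```

Exact enumeration is exponential in the number of features, which is why SHAP relies on sampling and model-specific approximations in practice; the efficiency property checked above is what makes the resulting attributions directly comparable across models.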
References
Jordan, M.I., Mitchell, T.M.: Machine learning: trends, perspectives, and prospects. Science 349(6245), 255–260 (2015)
Yang, C., Rangarajan, A., Ranka, S.: Global model interpretation via recursive partitioning (2018)
Ahmad, I.: 40 algorithms every programmer should know: hone your problem-solving skills by learning different algorithms and their implementation in Python (2020)
Nielsen, A.: Practical Fairness. O’Reilly Media Inc., Newton (2021)
Masis, S.: Interpretable Machine Learning with python: learn to build interpretable high-performance models with hands-on real-world examples (2021)
Molnar, C.: Interpretable machine learning: a guide for making black box models explainable (2020)
Briner, R., Denyer, D.: Systematic review and evidence synthesis as a practice and scholarship tool (2012)
He, C., Ma, M., Wang, P.: Extract interpretability-accuracy balanced rules from artificial neural networks: a review. Neurocomputing 387, 346–358 (2020)
Xu, F., Uszkoreit, H., Du, Y., Fan, W., Zhao, D., Zhu, J.: Explainable AI: a brief survey on history, research areas, approaches and challenges. In: Tang, J., Kan, M.-Y., Zhao, D., Li, S., Zan, H. (eds.) NLPCC 2019. LNCS (LNAI), vol. 11839, pp. 563–574. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-32236-6_51
Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: a review of machine learning interpretability methods. Entropy 23, 18 (2020)
Barredo-Arrieta, A., Laña, I., Del Ser, J.: What lies beneath: a note on the explainability of black-box machine learning models for road traffic forecasting (2019)
Oni, O., Qiao, S.: Model-agnostic interpretation of cancer classification with multi-platform genomic data, pp. 34–41 (2019)
Kumari, P., Haddela, P.S.: Use of LIME for human interpretability in Sinhala document classification, pp. 97–102 (2019)
Malhi, A., Kampik, T., Pannu, H., Madhikermi, M., Främling, K.: Explaining machine learning-based classifications of in-vivo gastral images, pp. 1–7 (2019)
Czejdo, D., Bhattacharya, S., Spooner, C.: Improvement of protein model scoring using grouping and interpreter for machine learning, pp. 0349–0353 (2019)
Tolan, S., Miron, M., Gómez, E., Castillo, C.: Why machine learning may lead to unfairness: evidence from risk assessment for juvenile justice in Catalonia (2019)
Spinner, T., Schlegel, U., Hauptmann, H., El-Assady, M.: explAIner: a visual analytics framework for interactive and explainable machine learning. IEEE Trans. Vis. Comput. Graph. 26, 1064–1074 (2019)
Teso, S., Kersting, K.: Explanatory interactive machine learning, pp. 239–245 (2019)
Nagrecha, S., Dillon, J., Chawla, N.: MOOC dropout prediction: lessons learned from making pipelines interpretable. In: WWW 2017 Companion: Proceedings of the 26th International Conference on World Wide Web Companion (2017)
Zhang, A., Lam, S., Liu, N., Pang, Y., Chan, L., Tang, P.: Development of a radiology decision support system for the classification of MRI brain scans, pp. 107–115 (2018)
De Aquino, R., Cozman, F.: Natural language explanations of classifier behavior, pp. 239–242 (2019)
Mothilal, R., Sharma, A., Tan, C.: Explaining machine learning classifiers through diverse counterfactual explanations, pp. 607–617 (2020)
Preece, A., Harborne, D., Raghavendra, R., Tomsett, R., Braines, D.: Provisioning robust and interpretable AI/ML-based service bundles, pp. 1–9 (2018)
Fong, R., Vedaldi, A.: Interpretable explanations of black boxes by meaningful perturbation (2017)
Singh, J., Anand, A.: EXS: explainable search using local model agnostic interpretability. In: WSDM 2019: Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining (2019)
Zhang, W., Ge, P., Jin, W., Guo, J.: Radar signal recognition based on TPOT and LIME (2018)
Koh, S., Wi, H., Kim, B., Jo, S.: Personalizing the prediction: interactive and interpretable machine learning, pp. 354–359 (2019)
Mampaka, M., Sumbwanyambe, M.: Poor data throughput root cause analysis in mobile networks using deep neural network, pp. 1–6 (2019)
Schuessler, M., Weiß, P.: Minimalistic explanations: capturing the essence of decisions, pp. 1–6 (2019)
El Shawi, R., Sherif, Y., Al-Mallah, M., Sakr, S.: Interpretability in healthcare: a comparative study of local machine learning interpretability techniques. Comput. Intell. 37, 1633–1650 (2020)
Messalas, A., Makris, C., Kanellopoulos, Y.: Model-agnostic interpretability with Shapley values (2019)
Prentzas, N., Pattichis, C., Kakas, A.: Integrating machine learning with symbolic reasoning to build an explainable AI model for stroke prediction (2019)
Zhu, X., Ruan, J., Zheng, Q., Dong, B.: IRTED-TL: an inter-region tax evasion detection method based on transfer learning (2018)
Costa, P., Galdran, A., Smailagic, A., Campilho, A.: A weakly-supervised framework for interpretable diabetic retinopathy detection on retinal images. IEEE Access 6, 18747–18758 (2018)
Boer, N., Deutch, D., Frost, N., Milo, T.: Just in time: personal temporal insights for altering model decisions (2020)
Lakkaraju, H., Bach, S., Leskovec, J.: Interpretable decision sets: a joint framework for description and prediction. In: KDD: Proceedings. International Conference on Knowledge Discovery and Data Mining (2016)
Ribeiro, M., Singh, S., Guestrin, C.: “Why should I trust you?”: explaining the predictions of any classifier (2016)
Lundberg, S., Lee, S.-I.: A unified approach to interpreting model predictions (2017)
Vincent, S.: Research Center. https://www.kaggle.com/mathchi/diabetes-data-set, Accessed 4 Oct 2021
Lad, R.: https://www.kaggle.com/richalad/parkinsons-predictions, Accessed 6 Oct 2021
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
de Abreu Araújo, I., Hidaka Torres, R., Neto, N.C.S. (2022). A Review of Framework for Machine Learning Interpretability. In: Schmorrow, D.D., Fidopiastis, C.M. (eds) Augmented Cognition. HCII 2022. Lecture Notes in Computer Science, vol 13310. Springer, Cham. https://doi.org/10.1007/978-3-031-05457-0_21
Print ISBN: 978-3-031-05456-3
Online ISBN: 978-3-031-05457-0