
A Review of Framework for Machine Learning Interpretability

  • Conference paper
Augmented Cognition (HCII 2022)

Abstract

Many applications require that the predictions made by machine learning algorithms be interpretable. In light of this, this paper presents a literature review analyzing the use of interpretability frameworks, tools coupled to algorithms that support a better understanding of output predictions. Altogether, 10 frameworks were cited, with LIME occurring most frequently among the 26 studies included in the review, which were selected from a preliminary analysis of 143 scientific articles. Finally, the interpretations produced by the LIME and SHAP frameworks were compared qualitatively, and similar behaviors were observed in the explanations generated for a neural network and a random forest, enabling an understanding of which features influence particular predictions of non-transparent models considered black boxes.
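The core idea behind LIME, the framework the review found most frequently, can be illustrated without the library itself: perturb an instance, query the black-box model, and fit a proximity-weighted linear surrogate whose coefficients act as local feature importances. The sketch below is illustrative only; the `black_box` function, sample count, and kernel width are assumptions for the example, not values from the paper.

```python
import numpy as np

# A minimal sketch of the LIME-style local surrogate idea: explain one
# prediction of a black-box model by fitting a weighted linear model on
# perturbations around the instance of interest.

rng = np.random.default_rng(0)

def black_box(X):
    # Stand-in for an opaque model (e.g. a neural network or random
    # forest score); here a sigmoid dominated by feature 0.
    return 1.0 / (1.0 + np.exp(-(3.0 * X[:, 0] - 0.5 * X[:, 1])))

def lime_style_explanation(f, x, n_samples=500, width=0.5):
    """Fit a locally weighted linear surrogate around instance x and
    return its per-feature coefficients as local importances."""
    # Sample perturbations around x and query the black box.
    X = x + rng.normal(scale=width, size=(n_samples, x.size))
    y = f(X)
    # Proximity kernel: perturbations near x get higher weight.
    w = np.exp(-np.sum((X - x) ** 2, axis=1) / (2 * width ** 2))
    # Weighted least squares with an intercept column.
    A = np.hstack([np.ones((n_samples, 1)), X]) * np.sqrt(w)[:, None]
    b = y * np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coef[1:]  # drop the intercept, keep feature coefficients

x0 = np.array([0.2, -0.1])
importances = lime_style_explanation(black_box, x0)
print(importances)  # feature 0 should dominate, feature 1 pulls slightly down
```

SHAP pursues the same goal through Shapley values rather than a single local linear fit, which is why the review can compare the two frameworks' explanations qualitatively on the same models.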






Author information

Corresponding author

Correspondence to Ivo de Abreu Araújo.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

de Abreu Araújo, I., Hidaka Torres, R., Neto, N.C.S. (2022). A Review of Framework for Machine Learning Interpretability. In: Schmorrow, D.D., Fidopiastis, C.M. (eds) Augmented Cognition. HCII 2022. Lecture Notes in Computer Science(), vol 13310. Springer, Cham. https://doi.org/10.1007/978-3-031-05457-0_21


  • DOI: https://doi.org/10.1007/978-3-031-05457-0_21

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-05456-3

  • Online ISBN: 978-3-031-05457-0

  • eBook Packages: Computer Science (R0)
