
Interpreting Neural Networks Prediction for a Single Instance via Random Forest Feature Contributions

  • Conference paper
  • In: Computational Science – ICCS 2021 (ICCS 2021)

Abstract

In this paper, we focus on the problem of interpreting Neural Networks at the instance level. The proposed approach uses Feature Contributions: numerical values that domain experts interpret further to reveal some phenomenon about a particular instance or about the model's behaviour. In our method, Feature Contributions are calculated from a Random Forest model trained to mimic the Artificial Neural Network's classification as closely as possible. We assume that the Feature Contributions can be trusted when both models make the same prediction, i.e., the Neural Network and the Random Forest return the same class. The results show that this agreement depends strongly on how well the Neural Network is trained, because the network's error propagates to the Random Forest model. For well-trained ANNs, the interpretation based on Feature Contributions can be trusted in 80% of cases on average.
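
To make the pipeline concrete, the sketch below follows the approach described in the abstract: an ANN is trained on the data, a Random Forest is trained on the ANN's predicted labels so that it mimics the network, the interpretation is trusted only for instances where the two models agree, and per-instance Feature Contributions are then read from the forest. This is a minimal illustration, not the authors' implementation: it assumes scikit-learn for both models and the third-party treeinterpreter package for the per-instance contribution decomposition; the dataset and all hyperparameters are placeholders.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from treeinterpreter import treeinterpreter as ti  # third-party: pip install treeinterpreter

# Placeholder dataset and hyperparameters, for illustration only.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1. Train the ANN whose predictions we want to interpret.
ann = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
ann.fit(X_train, y_train)

# 2. Train a Random Forest on the ANN's *predicted* labels, so that the
#    forest mimics the network rather than the ground truth.
rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(X_train, ann.predict(X_train))

# 3. Trust the interpretation only where both models agree.
agree = ann.predict(X_test) == rf.predict(X_test)
print(f"ANN/RF agreement on the test set: {agree.mean():.1%}")

# 4. Per-instance Feature Contributions from the forest: each predicted
#    probability decomposes as bias + one contribution per feature.
pred, bias, contributions = ti.predict(rf, X_test[:1])
cls = pred[0].argmax()  # class the forest predicts for instance 0
print("contributions towards the predicted class:", contributions[0, :, cls])

In this reading, the agreement rate of step 3 plays the role of the trust level reported in the abstract: for instances where the two predictions differ, the Feature Contributions are not taken as an explanation of the network's decision.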



Author information

Correspondence to Anna Palczewska or Urszula Markowska-Kaczmar.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Palczewska, A., Markowska-Kaczmar, U. (2021). Interpreting Neural Networks Prediction for a Single Instance via Random Forest Feature Contributions. In: Paszynski, M., Kranzlmüller, D., Krzhizhanovskaya, V.V., Dongarra, J.J., Sloot, P.M.A. (eds.) Computational Science – ICCS 2021. ICCS 2021. Lecture Notes in Computer Science, vol. 12743. Springer, Cham. https://doi.org/10.1007/978-3-030-77964-1_12

  • DOI: https://doi.org/10.1007/978-3-030-77964-1_12

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-77963-4

  • Online ISBN: 978-3-030-77964-1

  • eBook Packages: Computer Science, Computer Science (R0)
