A novel explainable image classification framework: case study on skin cancer and plant disease prediction

  • Original Article
  • Published:
Neural Computing and Applications

Abstract

An explainable/interpretable machine learning model is able to reason about its predictions in terms that humans can understand. This property is essential for trusting a model's predictions, especially when its decisions affect critical matters such as health, rights, security, and education. Image classification is an area of machine learning and computer vision in which convolutional neural networks have flourished, since they have shown remarkable performance on such problems. However, these models lack transparency, interpretability, and explainability, and are considered black box models. This work proposes a novel explainable image classification framework and applies it to skin cancer and plant disease prediction problems. The framework combines segmentation and clustering techniques to extract texture features from various subregions of the input image. A feature filtering and cleaning procedure is then applied to these features to ensure that the proposed model is also reliable and trustworthy, and the final features are used to train an intrinsic linear white box prediction model. Finally, a hierarchy-based tree approach is created to provide a meaningful interpretation of the model's decision behavior. The experimental results show that the model's explanations are clearly understandable, reliable, and trustworthy. Furthermore, regarding prediction accuracy, the proposed model achieves almost the same performance score (1–2% difference on average) as state-of-the-art black box convolutional image classification models. Such performance is considered noticeably good, since the proposed classifier is an explainable intrinsic white box model.
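To make the described pipeline more concrete, the short Python sketch below illustrates one plausible reading of it: the image is clustered into a fixed number of subregions, simple GLCM texture statistics are computed per region, highly correlated features are filtered out, and an intrinsic linear (white box) classifier is fit on the result. The region count, texture statistics, correlation threshold, and helper names are illustrative assumptions, not the authors' actual implementation; scikit-image (≥ 0.19) and scikit-learn are assumed.

```python
# Illustrative sketch only: region count, GLCM statistics, correlation threshold,
# and helper names are assumptions, not the paper's reported implementation.
# Assumes scikit-image >= 0.19 (graycomatrix/graycoprops) and scikit-learn.
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import graycomatrix, graycoprops
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression


def extract_region_texture_features(image, n_regions=6, random_state=0):
    """Cluster pixel colours into a fixed number of subregions and compute
    simple GLCM texture statistics (contrast, homogeneity, energy) per region."""
    gray = (rgb2gray(image) * 255).astype(np.uint8)
    pixels = image.reshape(-1, image.shape[-1]).astype(float)
    labels = KMeans(n_clusters=n_regions, n_init=10,
                    random_state=random_state).fit_predict(pixels)
    labels = labels.reshape(image.shape[:2])

    features = []
    for region in range(n_regions):
        masked = np.where(labels == region, gray, 0)  # zero out other regions
        glcm = graycomatrix(masked, distances=[1], angles=[0], levels=256)
        features += [graycoprops(glcm, prop)[0, 0]
                     for prop in ("contrast", "homogeneity", "energy")]
    return np.asarray(features)


def drop_correlated_features(X, threshold=0.9):
    """Feature filtering/cleaning: greedily keep columns that are not highly
    correlated with any column already kept."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    kept = []
    for j in range(corr.shape[0]):
        if all(corr[j, k] < threshold for k in kept):
            kept.append(j)
    return X[:, kept], kept


# Usage sketch (images: list of RGB arrays, y: class labels, e.g. benign/malignant):
# X = np.stack([extract_region_texture_features(img) for img in images])
# X, kept_columns = drop_correlated_features(X)
# white_box = LogisticRegression(max_iter=1000).fit(X, y)  # intrinsic linear model
```

Because the final classifier is linear, each retained texture feature maps to a single coefficient, which is what makes a hierarchy- or rule-style explanation of the decision behavior feasible.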



Author information


Corresponding author

Correspondence to Emmanuel Pintelas.

Ethics declarations

Conflicts of interest

The authors declared no potential conflicts of interest with respect to the research, authorship and/or publication of this article.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Pintelas, E., Liaskos, M., Livieris, I.E. et al. A novel explainable image classification framework: case study on skin cancer and plant disease prediction. Neural Comput & Applic 33, 15171–15189 (2021). https://doi.org/10.1007/s00521-021-06141-0


  • Received:

  • Accepted:

  • Published:

  • Issue Date:

  • DOI: https://doi.org/10.1007/s00521-021-06141-0
