Effective Melanoma Recognition Using Deep Convolutional Neural Network with Covariance Discriminant Loss
Abstract
1. Introduction
- (1) We propose a melanoma recognition approach that combines a deep convolutional neural network (DCNN) with a covariance discriminant (CovD) loss, strengthening both feature representation and classification of melanoma versus non-melanoma. The proposed loss has two formulations: a CovD contrastive loss and a CovD triplet loss.
- (2) We formulate a novel embedding loss, the covariance discriminant loss, to separate the classes in the feature space. By combining cross entropy (CE) loss and CovD loss, the recognition model can be optimized simultaneously from the views of model output and feature representation. The learned features are rectified by the CovD loss using first- and second-order distances, enlarging the inter-class distance and reducing the intra-class variation. Further, based on the CovD loss, we formulate a corresponding minority hard sample mining algorithm that selects misclassified samples and samples with poor feature representations. We also analyze the relationship between the CovD loss and other losses.
- (3) Extensive experiments on the International Symposium on Biomedical Imaging (ISBI) 2018 Skin Lesion Analysis dataset demonstrate that the proposed loss consistently boosts performance and that our method outperforms the comparison methods.
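The CovD loss is described above only at a high level. As a rough illustration of the idea, the sketch below contrasts class statistics at first order (means) and second order (covariances), compacting same-class features while pushing the two classes' statistics apart by a margin. The function name, the hinge form, and the weight `lam` are our own simplification for illustration, not the paper's exact formulation.

```python
import numpy as np

def covd_contrastive_loss(feats, labels, margin=1.0, lam=0.5):
    """Illustrative CovD-style contrastive loss (binary classes assumed):
    compact same-class features, and push the two classes' first-order
    (mean) and second-order (covariance) statistics at least `margin` apart."""
    c0, c1 = np.unique(labels)[:2]
    x0, x1 = feats[labels == c0], feats[labels == c1]
    mu0, mu1 = x0.mean(axis=0), x1.mean(axis=0)
    cov0 = np.cov(x0, rowvar=False)
    cov1 = np.cov(x1, rowvar=False)
    # intra-class compactness: mean squared distance to the class mean
    intra = (np.mean(np.sum((x0 - mu0) ** 2, axis=1))
             + np.mean(np.sum((x1 - mu1) ** 2, axis=1)))
    # inter-class separation of first- and second-order statistics
    d_mean = np.linalg.norm(mu0 - mu1)
    d_cov = np.linalg.norm(cov0 - cov1, ord="fro")
    # hinge: penalize only when the class statistics are closer than the margin
    hinge = max(0.0, margin - d_mean) + lam * max(0.0, margin - d_cov)
    return float(intra + hinge)
```

Moving the class means apart lowers this loss: two well-separated clusters score lower than the same two clusters superimposed, since the mean-distance hinge term vanishes.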
2. Related Work
2.1. Melanoma Recognition
2.2. Learning from Imbalanced Data
3. Methodology
3.1. Embedding Loss
3.2. Covariance Discriminant Loss
3.3. Minority Hard Sample Mining
Algorithm 1. The Parameter Updating Algorithm

Input: training images and the corresponding labels; parameters of the embedding model; parameters of the classification model; balance parameter λ; learning rate lr; number of training epochs L.
1: i = 0
2: while i < L do
3:   i = i + 1
4:   perform feature extraction and classification
5:   select positive and negative samples
6:   compute the CE loss and the CovD loss, combined with the balance parameter λ
7:   update the model parameters with learning rate lr
8: end while
Output: the learned parameters of the embedding and classification models.
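Algorithm 1 alternates a forward pass, minority hard sample mining, and a joint parameter update driven by the CE term plus a weighted embedding term. The NumPy sketch below mirrors that loop on a toy linear embedding and classifier; a simple mean-pull penalty on hard minority samples stands in for the CovD loss, and every name and hyperparameter here is illustrative rather than the authors' implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, lam=0.1, lr=0.1, epochs=500, seed=0):
    """Parameter updating in the spirit of Algorithm 1: forward pass,
    minority hard sample mining, joint CE + embedding loss, update."""
    rng = np.random.default_rng(seed)
    d, k = X.shape[1], 3
    W_emb = rng.normal(scale=0.1, size=(d, k))  # embedding model parameters
    w_cls = rng.normal(scale=0.1, size=k)       # classification model parameters
    n = len(y)
    for _ in range(epochs):
        f = X @ W_emb                 # feature extraction
        p = sigmoid(f @ w_cls)        # classification (melanoma probability)
        # minority hard sample mining: misclassified minority-class samples
        hard = (y == 1) & (p < 0.5)
        # gradients of the cross-entropy term
        err = (p - y) / n
        g_cls = f.T @ err
        g_emb = X.T @ np.outer(err, w_cls)
        # embedding term: pull hard minority features toward their class mean
        if hard.any():
            mu = f[y == 1].mean(axis=0)          # treated as a constant
            g_f = 2.0 * (f[hard] - mu) / hard.sum()
            g_emb += lam * (X[hard].T @ g_f)
        # simultaneous update of both parameter sets
        w_cls = w_cls - lr * g_cls
        W_emb = W_emb - lr * g_emb
    return W_emb, w_cls

# usage on toy imbalanced data (80 non-melanoma vs. 20 melanoma samples)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1.0, 0.5, size=(80, 4)),
               rng.normal(+1.0, 0.5, size=(20, 4))])
y = np.r_[np.zeros(80), np.ones(20)]
W_emb, w_cls = train(X, y)
acc = np.mean((sigmoid((X @ W_emb) @ w_cls) > 0.5) == y)
```

Treating the class mean as a constant in the embedding gradient (a stop-gradient) keeps the update simple; the real algorithm operates on DCNN features rather than a linear map.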
3.4. Relationship with Other Losses
4. Experiments
4.1. Dataset and Experimental Setting
4.2. Evaluation Metrics
4.3. Ablation Study
4.4. Comparison with Other Methods
4.5. Feature Visualization
4.6. Discussion
5. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
- Siegel, R.; Miller, K.D.; Fedewa, S.A.; Ahnen, D.J.; Meester, R.G.S.; Barzi, A.; Jemal, A. Colorectal cancer statistics, 2017. CA Cancer J. Clin. 2017, 67, 177–193.
- Esteva, A.; Kuprel, B.; Novoa, R.A.; Ko, J.; Swetter, S.M.; Blau, H.M.; Thrun, S. Dermatologist-level classification of skin cancer with deep neural networks. Nature 2017, 542, 115–118.
- Green, A.; Martin, N.; Pfitzner, J.; O’Rourke, M.; Knight, N. Computer image analysis in the diagnosis of melanoma. J. Am. Acad. Dermatol. 1994, 31, 958–964.
- Rubegni, P.; Cevenini, G.; Burroni, M.; Perotti, R.; Dell’Eva, G.; Sbano, P.; Miracco, C.; Luzi, P.; Tosi, P.; Barbini, P.; et al. Automated diagnosis of pigmented skin lesions. Int. J. Cancer 2002, 101, 576–580.
- Celebi, M.E.; Kingravi, H.A.; Uddin, B.; Iyatomi, H.; Aslandogan, Y.A.; Stoecker, W.V.; Moss, R.H. A methodological approach to the classification of dermoscopy images. Comput. Med. Imaging Graph. 2007, 31, 362–373.
- Situ, N.; Yuan, X.; Chen, J.; Zouridakis, G. Malignant melanoma detection by Bag-of-Features classification. In Proceedings of the 2008 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Vancouver, BC, Canada, 21–24 August 2008; pp. 3110–3113.
- Nasr-Esfahani, E.; Samavi, S.; Karimi, N.; Soroushmehr, S.; Jafari, M.; Ward, K.; Najarian, K. Melanoma detection by analysis of clinical images using convolutional neural network. In Proceedings of the 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Orlando, FL, USA, 16–20 August 2016; pp. 1373–1376.
- Demyanov, S.; Chakravorty, R.; Abedini, M.; Halpern, A.; Garnavi, R. Classification of dermoscopy patterns using deep convolutional neural networks. In Proceedings of the 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), Prague, Czech Republic, 13–16 April 2016; pp. 364–368.
- Yu, Z.; Ni, D.; Chen, S.; Qin, J.; Li, S.; Wang, T.; Lei, B. Hybrid dermoscopy image classification framework based on deep convolutional neural network and Fisher vector. In Proceedings of the 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI), Melbourne, Australia, 18–21 April 2017; pp. 301–304.
- Ge, Z.; Demyanov, S.; Bozorgtabar, B.; Abedini, M.; Chakravorty, R.; Bowling, A.; Garnavi, R. Exploiting local and generic features for accurate skin lesions classification using clinical and dermoscopy imaging. In Proceedings of the 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI), Melbourne, Australia, 18–21 April 2017; pp. 986–990.
- Harangi, B. Skin lesion classification with ensembles of deep convolutional neural networks. J. Biomed. Inform. 2018, 86, 25–32.
- Mahbod, A.; Schaefer, G.; Wang, C.; Ecker, R.; Ellinge, I. Skin Lesion Classification Using Hybrid Deep Neural Networks. In Proceedings of the ICASSP 2019—2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019; pp. 1229–1233.
- Yu, L.; Chen, H.; Dou, Q.; Qin, J.; Heng, P.-A. Automated Melanoma Recognition in Dermoscopy Images via Very Deep Residual Networks. IEEE Trans. Med. Imaging 2016, 36, 994–1004.
- Yan, Y.; Kawahara, J.; Hamarneh, G. Melanoma Recognition via Visual Attention. In Proceedings of the International Conference on Information Processing in Medical Imaging, Hong Kong, China, 2–7 June 2019; pp. 793–804.
- Yang, J.; Xie, F.; Fan, H.; Jiang, Z.; Liu, J. Classification for Dermoscopy Images Using Convolutional Neural Networks Based on Region Average Pooling. IEEE Access 2018, 6, 65130–65138.
- Gessert, N.; Sentker, T.; Madesta, F.; Schmitz, R.; Kniep, H.; Baltruschat, I.; Werner, R.; Schlaefer, A. Skin Lesion Classification Using CNNs With Patch-Based Attention and Diagnosis-Guided Loss Weighting. IEEE Trans. Biomed. Eng. 2020, 67, 495–503.
- Ren, J. ANN vs. SVM: Which one performs better in classification of MCCs in mammogram imaging. Knowl. Based Syst. 2012, 26, 144–153.
- Zhang, J.; Xie, Y.; Xia, Y.; Shen, C. Attention Residual Learning for Skin Lesion Classification. IEEE Trans. Med. Imaging 2019, 38, 2092–2103.
- Zafar, K.; Gilani, S.O.; Waris, A.; Ahmed, A.; Jamil, M.; Khan, M.N.; Kashif, A.S. Skin Lesion Segmentation from Dermoscopic Images Using Convolutional Neural Network. Sensors 2020, 20, 1601.
- Huang, C.; Li, Y.; Loy, C.C.; Tang, X. Learning Deep Representation for Imbalanced Classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 26 June–1 July 2016; pp. 5375–5384.
- Yan, Y.; Chen, M.; Shyu, M.-L.; Chen, S.-C. Deep Learning for Imbalanced Multimedia Data Classification. In Proceedings of the 2015 IEEE International Symposium on Multimedia (ISM), Miami, FL, USA, 14–16 December 2015; pp. 483–488.
- Pouyanfar, S.; Tao, Y.; Mohan, A.; Tian, H.; Kaseb, A.S.; Gauen, K.; Dailey, R.; Aghajanzadeh, S.; Lu, Y.-H.; Chen, S.-C.; et al. Dynamic Sampling in Convolutional Neural Networks for Imbalanced Data Classification. In Proceedings of the 2018 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR), Miami, FL, USA, 10–12 April 2018; pp. 112–117.
- Zabalza, J.; Ren, J.; Zheng, J.; Zhao, H.; Qing, C.; Yang, Z.; Du, P.; Marshall, S. Novel segmented stacked autoencoder for effective dimensionality reduction and feature extraction in hyperspectral imaging. Neurocomputing 2016, 185, 1–10.
- Wang, S.; Liu, W.; Wu, J.; Cao, L.; Meng, Q.; Kennedy, P.J. Training deep neural networks on imbalanced data sets. In Proceedings of the 2016 International Joint Conference on Neural Networks (IJCNN), Vancouver, BC, Canada, 24–29 July 2016; pp. 4368–4374.
- Zhang, X.; Fang, Z.; Wen, Y.; Li, Z.; Qiao, Y. Range Loss for Deep Face Recognition with Long-Tailed Training Data. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 5419–5428.
- Lin, T.-Y.; Goyal, P.; Girshick, R.; He, K.; Dollar, P. Focal Loss for Dense Object Detection. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2980–2988.
- Sarafianos, N.; Xu, X.; Kakadiaris, I. Deep Imbalanced Attribute Classification Using Visual Attention Aggregation. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 708–725.
- Yan, Y.; Ren, J.; Yu, H.; Zhao, H.; Han, J.; Li, X.; Marshall, S.; Zhan, J. Unsupervised image saliency detection with Gestalt-laws guided optimization and visual attention based refinement. Pattern Recognit. 2018, 79, 65–78.
- Zhang, C.; Tan, K.C.; Ren, R. Training cost-sensitive Deep Belief Networks on imbalance data problems. In Proceedings of the 2016 International Joint Conference on Neural Networks (IJCNN), Vancouver, BC, Canada, 24–29 July 2016; pp. 4362–4367.
- Khan, S.H.; Hayat, M.; Bennamoun, M.; Sohel, F.A.; Togneri, R. Cost-Sensitive Learning of Deep Feature Representations from Imbalanced Data. IEEE Trans. Neural Netw. Learn. Syst. 2017, 29, 3573–3587.
- Ren, M.; Zeng, W.; Yang, B.; Urtasun, R. Learning to Reweight Examples for Robust Deep Learning. In Proceedings of the International Conference on Machine Learning, Stockholm, Sweden, 10–15 July 2018; pp. 4334–4343.
- Hadsell, R.; Chopra, S.; LeCun, Y. Dimensionality Reduction by Learning an Invariant Mapping. In Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), New York, NY, USA, 17–22 June 2006; pp. 1735–1742.
- Schroff, F.; Kalenichenko, D.; Philbin, J. FaceNet: A unified embedding for face recognition and clustering. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 8–12 June 2015; pp. 815–823.
- Dong, Q.; Gong, S.; Zhu, X. Class Rectification Hard Mining for Imbalanced Deep Learning. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 1869–1878.
- Ando, S.; Huang, C.Y. Deep Over-sampling Framework for Classifying Imbalanced Data. In Proceedings of the Joint European Conference on Machine Learning and Knowledge Discovery in Databases, Skopje, Macedonia, 18–22 September 2017; pp. 770–785.
- Wang, Y.X.; Ramanan, D.; Hebert, M. Learning to model the tail. In Advances in Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 7029–7039.
- Argenziano, G.; Fabbrocini, G.; Carli, P.; De Giorgi, V.; Sammarco, E.; Delfino, M. Epiluminescence Microscopy for the Diagnosis of Doubtful Melanocytic Skin Lesions. Arch. Dermatol. 1998, 134, 1563–1570.
- Krawczyk, B. Learning from imbalanced data: Open challenges and future directions. Prog. Artif. Intell. 2016, 5, 221–232.
- Johnson, J.M.; Khoshgoftaar, T.M. Survey on deep learning with class imbalance. J. Big Data 2019, 6, 27.
- Zhang, H.; Wu, C.; Zhang, Z.; Zhu, Y.; Zhang, Z.; Lin, H.; Sun, Y.; He, T.; Mueller, J.; Manmatha, R.; et al. ResNeSt: Split-Attention Networks. arXiv 2020, arXiv:2004.08955.
- Lin, T.-Y.; Roychowdhury, A.; Maji, S. Bilinear CNN Models for Fine-Grained Visual Recognition. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 1449–1457.
- Codella, N.; Rotemberg, V.; Tschandl, P.; Celebi, M.E.; Dusza, S.; Gutman, D.; Helba, B.; Kalloo, A.; Liopyris, K.; Marchetti, M.; et al. Skin Lesion Analysis Toward Melanoma Detection 2018: A Challenge Hosted by the International Skin Imaging Collaboration (ISIC). arXiv 2019, arXiv:1902.03368.
- Tschandl, P.; Rosendahl, C.; Kittler, H. The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions. Sci. Data 2018, 5, 180161.
- Durand, T.; Mordan, T.; Thome, N.; Cord, M. WILDCAT: Weakly Supervised Learning of Deep ConvNets for Image Classification, Pointwise Localization and Segmentation. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 5957–5966.
- Diba, A.; Sharma, V.; Pazandeh, A.; Pirsiavash, H.; Van Gool, L. Weakly Supervised Cascaded Convolutional Networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 5131–5139.
- Leibig, C.; Allken, V.; Ayhan, M.S.; Berens, P.; Wahl, S. Leveraging uncertainty information from deep neural networks for disease detection. Sci. Rep. 2017, 7, 17816.
- Rączkowski, Ł.; Możejko, M.; Zambonelli, J.; Szczurek, E. ARA: Accurate, reliable and active histopathological image classification framework with Bayesian deep learning. Sci. Rep. 2019, 9, 14347.
- Bengio, Y.; Courville, A.; Vincent, P. Representation Learning: A Review and New Perspectives. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1798–1828.
Method | Sensitivity | Specificity | Accuracy | AUC |
---|---|---|---|---|
CE | 0.914 | 0.536 | 0.578 | 0.822 |
CE + Contrastive Loss | 0.831 | 0.838 | 0.837 | 0.922 |
CE + Triplet Loss | 0.874 | 0.752 | 0.765 | 0.896 |
CE + CovD Contrastive Loss | 0.942 | 0.747 | 0.769 | 0.925 |
CE + CovD Triplet Loss | 0.917 | 0.767 | 0.784 | 0.912 |
Method | Sensitivity | Specificity | Accuracy | AUC |
---|---|---|---|---|
HDNN [12] | 0.417 | 0.972 | 0.911 | 0.908 |
Method using global and local features [10] | 0.745 | 0.863 | 0.850 | 0.883 |
Ensemble Method [11] | 0.759 | 0.869 | 0.857 | 0.906 |
Patch-Based Attention Method [16] | 0.888 | 0.562 | 0.625 | 0.838 |
CE + CovD Contrastive Loss | 0.942 | 0.747 | 0.769 | 0.925 |
CE + CovD Triplet Loss | 0.917 | 0.767 | 0.784 | 0.912 |
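The four metrics reported in the tables above can be computed from raw classifier scores as follows. This is a minimal NumPy sketch, not the authors' evaluation code; the rank-based AUC shown here does not handle tied scores.

```python
import numpy as np

def binary_metrics(y_true, scores, thr=0.5):
    """Sensitivity, specificity, accuracy, and AUC for a binary classifier.
    AUC uses the rank-sum (Mann-Whitney U) formulation; ties are ignored."""
    y_pred = (scores >= thr).astype(int)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    sens = tp / (tp + fn)                 # recall on the melanoma class
    spec = tn / (tn + fp)                 # recall on the non-melanoma class
    acc = (tp + tn) / len(y_true)
    # rank every score, then apply the Mann-Whitney U statistic
    ranks = np.empty(len(scores))
    ranks[scores.argsort()] = np.arange(1, len(scores) + 1)
    n1 = np.sum(y_true == 1)
    n0 = len(y_true) - n1
    auc = (ranks[y_true == 1].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)
    return sens, spec, acc, auc
```

Note how the tables trade sensitivity against specificity: because AUC is threshold-free, it is the most comparable single number across methods.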
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Guo, L.; Xie, G.; Xu, X.; Ren, J. Effective Melanoma Recognition Using Deep Convolutional Neural Network with Covariance Discriminant Loss. Sensors 2020, 20, 5786. https://doi.org/10.3390/s20205786