Mixed Maximum Loss Design for Optic Disc and Optic Cup Segmentation with Deep Learning from Imbalanced Samples
Abstract
1. Introduction
2. Related Work
2.1. Maximal Loss Minimization Learning Strategy
2.2. Deep Learning with Maximal Loss Minimization
3. Methods
3.1. U-Shaped Convolutional Neural Network with Multi-Kernel
3.2. Mixed Maximum Loss Minimization Training Strategy
Algorithm 1 Mixed Maximum Loss Minimization Training for MSMKU

Require: Randomly initialize the network parameters and get the initial model.
While the network has not converged do
    For t = 1, 2, …, 200 do
        Sort the per-sample losses from big to small and get the sample set.
    End for
End while
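The selection step in Algorithm 1 (sorting per-sample losses from big to small and training against a mixture of the hardest-sample loss and the average loss) can be sketched as follows. The blend weight `alpha` and the top-k cutoff are illustrative assumptions for this sketch, not values taken from the paper:

```python
import numpy as np

def mixed_max_loss(per_sample_losses, k, alpha=0.5):
    """Hypothetical scalar objective mixing average top-k loss with the
    ordinary average loss, in the spirit of maximal-loss minimization."""
    # Sort losses from big to small, as in Algorithm 1.
    sorted_losses = np.sort(np.asarray(per_sample_losses, dtype=float))[::-1]
    top_k = sorted_losses[:k].mean()   # emphasis on the hardest samples
    overall = sorted_losses.mean()     # standard empirical risk
    # Mix the two objectives; alpha is an assumed weighting.
    return alpha * top_k + (1.0 - alpha) * overall
```

In an actual training loop, a gradient step would then be taken on the selected hard samples; only the loss-ranking and mixing logic is shown here.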
4. Experiments
4.1. Datasets and Evaluation Method
4.2. Experimental Setup
4.3. Experimental Results
5. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
- Tham, Y.C.; Li, X.; Wong, T.Y.; Quigley, H.A.; Aung, T.; Cheng, C.Y. Global prevalence of glaucoma and projections of glaucoma burden through 2040: A systematic review and meta-analysis. Ophthalmology 2014, 121, 2081–2090. [Google Scholar] [CrossRef]
- Harizman, N.; Oliveira, C.; Chiang, A.; Tello, C.; Marmor, M.; Ritch, R.; Liebmann, J.M. The ISNT rule and differentiation of normal from glaucomatous eyes. Arch. Ophthalmol. 2006, 124, 1579–1583. [Google Scholar] [CrossRef]
- Stapor, K.; Switonski, A.; Chrastek, R.; Michelson, G. Segmentation of fundus eye images using methods of mathematical morphology for glaucoma diagnosis. In Proceedings of the International Conference on Computational Science, Kraków, Poland, 6–9 June 2004; pp. 41–48. [Google Scholar]
- Inoue, N.; Yanashima, K.; Magatani, K.; Kurihara, T. Development of a simple diagnostic method for the glaucoma using ocular fundus pictures. In Proceedings of the 2005 IEEE Engineering in Medicine and Biology 27th Annual Conference, Shanghai, China, 17–18 January 2006; pp. 3355–3358. [Google Scholar]
- Zhu, X.; Rangayyan, R.M.; Ells, A.L. Detection of the optic nerve head in fundus images of the retina using the Hough transform for circles. J. Digit. Imaging 2010, 23, 332–341. [Google Scholar] [CrossRef]
- Aquino, A.; Gegeundez-Arias, M.E.; Marín, D. Detecting the optic disc boundary in digital fundus images using morphological, edge detection, and feature extraction techniques. IEEE Trans. Med. Imaging 2010, 29, 1860–1869. [Google Scholar] [CrossRef]
- Lowell, J.; Hunter, A.; Steel, D.; Basu, A.; Ryder, R.; Fletcher, E.; Kennedy, L. Optic nerve head segmentation. IEEE Trans. Med. Imaging 2004, 23, 256–264. [Google Scholar] [CrossRef]
- Joshi, G.D.; Sivaswamy, J.; Krishnadas, S. Optic disk and cup segmentation from monocular color retinal images for glaucoma assessment. IEEE Trans. Med. Imaging 2011, 30, 1192–1205. [Google Scholar] [CrossRef]
- Cheng, J.; Liu, J.; Xu, Y.; Yin, F.; Wong, D.W.K.; Tan, N.M.; Tao, D.; Cheng, C.Y.; Aung, T.; Wong, T.Y. Superpixel classification based optic disc and optic cup segmentation for glaucoma screening. IEEE Trans. Med. Imaging 2013, 32, 1019–1032. [Google Scholar] [CrossRef]
- Xu, Y.; Duan, L.; Lin, S.; Chen, X.; Wong, D.W.K.; Wong, T.Y.; Liu, J. Optic cup segmentation for glaucoma detection using low-rank superpixel representation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Boston, MA, USA, 14–18 September 2014; pp. 788–795. [Google Scholar]
- Wong, D.; Liu, J.; Lim, J.; Li, H.; Wong, T. Automated detection of kinks from blood vessels for optic cup segmentation in retinal images. In Proceedings of the Medical Imaging 2009: Computer-Aided Diagnosis, Lake Buena Vista, FL, USA, 7–12 February 2009; p. 72601J. [Google Scholar]
- Hu, M.; Zhu, C.; Li, X.; Xu, Y. Optic cup segmentation from fundus images for glaucoma diagnosis. Bioengineered 2017, 8, 21–28. [Google Scholar] [CrossRef]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. In Proceedings of the Neural Information Processing Systems, Lake Tahoe, NV, USA, 3–6 December 2012; pp. 1097–1105. [Google Scholar]
- Kutlu, H.; Avcı, E. A Novel Method for Classifying Liver and Brain Tumors Using Convolutional Neural Networks, Discrete Wavelet Transform and Long Short-Term Memory Networks. Sensors 2019, 19, 1992. [Google Scholar] [CrossRef]
- Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Region-based convolutional networks for accurate object detection and segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 38, 142–158. [Google Scholar] [CrossRef]
- Alaskar, H.; Hussain, A.; Al-Aseem, N.; Liatsis, P.; Al-Jumeily, D. Application of Convolutional Neural Networks for Automated Ulcer Detection in Wireless Capsule Endoscopy Images. Sensors 2019, 19, 1265. [Google Scholar] [CrossRef] [PubMed]
- Hariharan, B.; Arbeláez, P.; Girshick, R.; Malik, J. Hypercolumns for object segmentation and fine-grained localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 447–456. [Google Scholar]
- Gu, Z.; Cheng, J.; Fu, H.; Zhou, K.; Hao, H.; Zhao, Y.; Zhang, T.; Gao, S.; Liu, J. CE-Net: Context Encoder Network for 2D Medical Image Segmentation. IEEE Trans. Med. Imaging 2019, 38, 2281–2291. [Google Scholar] [CrossRef] [PubMed]
- Sevastopolsky, A. Optic disc and cup segmentation methods for glaucoma detection with modification of u-net convolutional neural network. Pattern Recognit. Image Anal. 2017, 27, 618–624. [Google Scholar] [CrossRef]
- Zilly, J.; Buhmann, J.M.; Mahapatra, D. Glaucoma detection using entropy sampling and ensemble learning for automatic optic cup and disc segmentation. Comput. Med. Imaging Graph. 2017, 55, 28–41. [Google Scholar] [CrossRef] [PubMed]
- Fu, H.; Cheng, J.; Xu, Y.; Wong, D.W.K.; Liu, J.; Cao, X. Joint optic disc and cup segmentation based on multi-label deep network and polar transformation. IEEE Trans. Med. Imaging 2018, 37, 1597–1605. [Google Scholar] [CrossRef] [PubMed]
- Freund, Y.; Schapire, R.E. A decision-theoretic generalization of on-line learning and an application to boosting. J. Comput. Syst. Sci. 1997, 55, 119–139. [Google Scholar] [CrossRef]
- Vapnik, V. Statistical Learning Theory. In The Nature of Statistical Learning Theory, 2nd ed.; Springer-Verlag Inc.: New York, NY, USA, 2000; pp. 156–160. [Google Scholar]
- Shalev-Shwartz, S.; Wexler, Y. Minimizing the maximal loss: How and why. In Proceedings of the 33rd International Conference on Machine Learning, New York, NY, USA, 19–24 June 2016; pp. 793–801. [Google Scholar]
- Fan, Y.; Lyu, S.; Ying, Y.; Hu, B. Learning with average top-k loss. In Proceedings of the Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 497–505. [Google Scholar]
- Katharopoulos, A.; Fleuret, F. Biased importance sampling for deep neural network training. arXiv 2017, arXiv:1706.00043. [Google Scholar]
- Li, H.; Gong, M. Self-paced convolutional neural networks. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, Melbourne, Australia, 19–25 August 2017; pp. 2110–2116. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 1904–1916. [Google Scholar] [CrossRef]
- Fumero, F.; Alayón, S.; Sanchez, J.L.; Sigut, J.; Gonzalez-Hernandez, M. Rim-one: An open retinal image database for optic nerve evaluation. In Proceedings of the 2011 24th International Symposium on Computer-Based Medical Systems (CBMS), Bristol, UK, 27–30 June 2011; pp. 1–6. [Google Scholar]
- Sivaswamy, J.; Krishnadas, S.; Chakravarty, A.; Joshi, G.; Tabish, A.S. A comprehensive retinal image dataset for the assessment of glaucoma from the optic nerve head analysis. JSM Biomed. Imaging Data Pap. 2015, 2, 1004. [Google Scholar]
- Chakravarty, A.; Sivaswamy, J. Coupled sparse dictionary for depth-based cup segmentation from single color fundus image. In Proceedings of the International Conference on Medical Image Computing and Computer Assisted Intervention, Boston, MA, USA, 14–18 September 2014; pp. 747–754. [Google Scholar]
- Wong, D.; Liu, J.; Lim, J.; Jia, X.; Yin, F.; Li, H.; Wong, T. Level-set based automatic cup-to-disc ratio determination using retinal fundus images in ARGALI. In Proceedings of the 2008 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Vancouver, BC, Canada, 20–25 August 2008; pp. 2266–2269. [Google Scholar]
- Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3431–3440. [Google Scholar]
- Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495. [Google Scholar] [CrossRef]
- Son, J.; Park, S.J.; Jung, K.H. Towards Accurate Segmentation of Retinal Vessels and the Optic Disc in Fundoscopic Images with Generative Adversarial Networks. J. Digit. Imaging 2019, 32, 499–512. [Google Scholar] [CrossRef]
- Xu, Y.; Jia, X.; Hu, M.; Sun, X. Feature extraction from optic disc and cup boundary lines in fundus images based on ISNT rule for glaucoma diagnosis. J. Med. Imaging Health Inf. 2015, 5, 1833–1838. [Google Scholar] [CrossRef]
| Methods | RIM-ONE-V3 Optic Disc | RIM-ONE-V3 Optic Cup | DRISHTI-GS Optic Disc | DRISHTI-GS Optic Cup |
| --- | --- | --- | --- | --- |
| Level-set method [32] | 0.883 | 0.726 | 0.911 | 0.771 |
| Morphological method [6] | 0.901 | -- | 0.932 | -- |
| Superpixel method [9] | 0.892 | 0.744 | 0.921 | 0.789 |
| Superpixel method [10] | -- | 0.753 | -- | 0.791 |
| MSMKU–MMLM | 0.956 | 0.856 | 0.978 | 0.892 |
| Methods | Optic Disc F-Score | Optic Disc IoU | Optic Disc Sensitivity | Optic Disc Specificity | Optic Cup F-Score | Optic Cup IoU | Optic Cup Sensitivity | Optic Cup Specificity |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Ensemble CNN [20] | 0.9420 | -- | -- | -- | 0.8240 | -- | -- | -- |
| Small-scale U-Net [19] | 0.9359 | 0.8808 | 0.9502 | 0.9973 | 0.8128 | 0.6977 | 0.7545 | 0.9976 |
| FCN [33] | 0.9508 | 0.9081 | 0.9494 | 0.9984 | 0.7973 | 0.6818 | 0.8011 | 0.9985 |
| SegNet [34] | 0.9483 | 0.9080 | 0.9449 | 0.9985 | 0.8299 | 0.7250 | 0.8081 | 0.9967 |
| GAN [35] | 0.9532 | 0.9122 | 0.9457 | 0.9987 | 0.8250 | 0.7165 | 0.8142 | 0.9965 |
| CE-Net [18] | 0.9527 | 0.9115 | 0.9502 | 0.9986 | 0.8435 | 0.7424 | 0.8352 | 0.9970 |
| M-Net [21] | 0.9526 | 0.9114 | 0.9481 | 0.9986 | 0.8348 | 0.7300 | 0.8146 | 0.9967 |
| MSMKU–MMLM | 0.9561 | 0.9172 | 0.9521 | 0.9987 | 0.8564 | 0.7586 | 0.8515 | 0.9971 |
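The four metrics reported above can be computed from binary prediction and ground-truth masks in the standard way; this is a generic sketch, not the paper's own evaluation code:

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """F-score, IoU, sensitivity, and specificity for binary masks."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    tp = np.logical_and(pred, gt).sum()    # true positives
    fp = np.logical_and(pred, ~gt).sum()   # false positives
    fn = np.logical_and(~pred, gt).sum()   # false negatives
    tn = np.logical_and(~pred, ~gt).sum()  # true negatives
    f_score = 2 * tp / (2 * tp + fp + fn)
    iou = tp / (tp + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return f_score, iou, sensitivity, specificity
```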
| Methods | Optic Disc F-Score | Optic Disc IoU | Optic Disc Sensitivity | Optic Disc Specificity | Optic Cup F-Score | Optic Cup IoU | Optic Cup Sensitivity | Optic Cup Specificity |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Ensemble CNN [20] | 0.9730 | -- | -- | -- | 0.8710 | -- | -- | -- |
| Small-scale U-Net [19] | 0.9043 | 0.8350 | 0.9156 | 0.9969 | 0.8521 | 0.7515 | 0.8476 | 0.9881 |
| FCN [33] | 0.9558 | 0.9188 | 0.9611 | 0.9988 | 0.8519 | 0.7590 | 0.8618 | 0.9857 |
| SegNet [34] | 0.9680 | 0.9387 | 0.9652 | 0.9991 | 0.8712 | 0.7836 | 0.8957 | 0.9856 |
| GAN [35] | 0.9527 | 0.9185 | 0.9747 | 0.9977 | 0.8643 | 0.7748 | 0.8539 | 0.9907 |
| CE-Net [18] | 0.9642 | 0.9323 | 0.9759 | 0.9990 | 0.8818 | 0.8006 | 0.8819 | 0.9909 |
| M-Net [21] | 0.9678 | 0.9386 | 0.9711 | 0.9991 | 0.8618 | 0.7730 | 0.8822 | 0.9862 |
| MSMKU–MMLM | 0.9780 | 0.9496 | 0.9792 | 0.9994 | 0.8921 | 0.8232 | 0.9157 | 0.9989 |
| Methods | RIM-ONE-V3 OD | RIM-ONE-V3 OC | DRISHTI-GS OD | DRISHTI-GS OC |
| --- | --- | --- | --- | --- |
| Joint MSU | 0.949 | 0.825 | 0.974 | 0.863 |
| Independent MSU | 0.952 | 0.827 | 0.975 | 0.869 |
| Two-stage MSU | 0.952 | 0.831 | 0.975 | 0.875 |
| MSMKU–MLM | 0.953 | 0.847 | 0.972 | 0.884 |
| MSMKU–ALM | 0.955 | 0.849 | 0.979 | 0.883 |
| MSMKU–MMLM | 0.956 | 0.856 | 0.978 | 0.892 |
| Methods | RIM-ONE-V3 VCDR Difference | RIM-ONE-V3 AUC | DRISHTI-GS VCDR Difference | DRISHTI-GS AUC |
| --- | --- | --- | --- | --- |
| Small-scale U-Net [19] | 0.067 | 0.832 | 0.081 | 0.800 |
| FCN [33] | 0.071 | 0.815 | 0.091 | 0.788 |
| SegNet [34] | 0.072 | 0.768 | 0.079 | 0.769 |
| GAN [35] | 0.063 | 0.803 | 0.091 | 0.748 |
| CE-Net [18] | 0.059 | 0.864 | 0.076 | 0.751 |
| M-Net [21] | 0.059 | 0.821 | 0.092 | 0.728 |
| MSMKU–MMLM | 0.051 | 0.882 | 0.054 | 0.901 |
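The VCDR differences above compare predicted and ground-truth vertical cup-to-disc ratios. A minimal sketch of computing VCDR from segmentation masks, assuming the vertical diameter is taken as the largest column-wise foreground extent (a common convention, not necessarily the paper's exact definition):

```python
import numpy as np

def vertical_diameter(mask):
    """Vertical extent in pixels: the most foreground pixels in any column."""
    return np.asarray(mask, bool).sum(axis=0).max()

def vcdr(cup_mask, disc_mask):
    """Vertical cup-to-disc ratio from binary segmentation masks."""
    return vertical_diameter(cup_mask) / vertical_diameter(disc_mask)
```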
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Xu, Y.-l.; Lu, S.; Li, H.-x.; Li, R.-r. Mixed Maximum Loss Design for Optic Disc and Optic Cup Segmentation with Deep Learning from Imbalanced Samples. Sensors 2019, 19, 4401. https://doi.org/10.3390/s19204401