Abstract
Deep convolutional neural networks (CNNs) have achieved commendable results on a variety of medical image segmentation tasks. However, CNNs usually require a large number of training samples with accurate annotations, which are extremely difficult and expensive to obtain in the medical image analysis field. In practice, we notice that junior trainees, after appropriate training, can label medical images in some medical image segmentation applications. These non-expert annotations are much easier to obtain and can be regarded as a source of weak annotation to guide network learning. In this paper, we propose a novel Tri-network learning framework to alleviate the problem of insufficient accurate annotations in medical segmentation tasks by utilizing non-expert annotations. Specifically, we maintain three networks in our framework, and each pair of networks alternately selects informative samples to guide the learning of the third network, according to the consensus and differences between their predictions. The three networks are jointly optimized in this collaborative manner. We evaluated our method on real and simulated non-expert annotated datasets. The experimental results show that our method effectively mines useful information from the non-expert annotations to improve segmentation performance and outperforms other competing methods.
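To make the pair-wise sample selection concrete, the following is a minimal PyTorch-style sketch of one training step. It is a simplified, classification-style illustration only: the function names (`select_for_third`, `tri_network_step`), the `keep_ratio` parameter, and the concrete rule of keeping agreed, small-loss samples are our assumptions about how "consensus and difference between predictions" could be turned into a selection criterion; the actual method operates on segmentation maps with pixel-wise losses.

```python
import torch
import torch.nn.functional as F

def select_for_third(logits_a, logits_b, labels, keep_ratio=0.8):
    # Consensus: keep only samples on which networks A and B predict the same class.
    pred_a = logits_a.argmax(dim=1)
    pred_b = logits_b.argmax(dim=1)
    agree = pred_a == pred_b
    # Rank agreed samples by their combined loss and keep the smallest ones;
    # the small-loss criterion is an assumption borrowed from co-teaching-style methods.
    loss = (F.cross_entropy(logits_a, labels, reduction='none')
            + F.cross_entropy(logits_b, labels, reduction='none'))
    loss = loss.masked_fill(~agree, float('inf'))
    k = max(1, int(keep_ratio * agree.sum().item()))
    return loss.topk(k, largest=False).indices

def tri_network_step(nets, optims, images, noisy_labels, keep_ratio=0.8):
    # One joint update: for every network, the other two select its training samples.
    logits = [net(images) for net in nets]
    for third in range(3):
        a, b = [i for i in range(3) if i != third]
        idx = select_for_third(logits[a].detach(), logits[b].detach(),
                               noisy_labels, keep_ratio)
        loss = F.cross_entropy(logits[third][idx], noisy_labels[idx])
        optims[third].zero_grad()
        loss.backward()
        optims[third].step()
```

In practice such a selection ratio would likely be scheduled over training rather than fixed; this sketch only illustrates how the consensus of two networks can filter the supervision given to the third.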
T. Zhang and L. Yu—Equal contribution.
Cite this paper
Zhang, T., Yu, L., Hu, N., Lv, S., Gu, S.: Robust Medical Image Segmentation from Non-expert Annotations with Tri-network. In: Martel, A.L., et al. (eds.) Medical Image Computing and Computer Assisted Intervention – MICCAI 2020. LNCS, vol. 12264. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-59719-1_25