
Automatic Nodule Segmentation Method for CT Images Using Aggregation-U-Net Generative Adversarial Networks

  • Original Paper
  • Published in Sensing and Imaging

Abstract

Nodule segmentation plays a vital role in the detection and diagnosis of lung cancer. However, manual segmentation by radiologists is time-consuming and labor-intensive. In recent years, deep learning methods have performed well on medical image segmentation. Generative adversarial networks (GANs) are commonly used for image generation; in this paper, a GAN is instead applied to image segmentation. We propose a network called Aggregation-U-net GAN (AUGAN) to automatically segment nodules in chest computed tomography (CT) images. The generator, built by combining U-net with deep layer aggregation, learns the features of lung nodules and drives the segmentation toward the ground truth. We also used a data augmentation method called tissue augmentation, in which 300 patches were manually selected from normal regions of chest CT scans of healthy lungs and superimposed over lesions. Experimental results show that our approach outperforms other nodule segmentation methods, with significant improvements in Dice coefficient and Hausdorff distance.
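The two evaluation metrics named in the abstract can be sketched as follows. This is a minimal NumPy illustration of how Dice coefficient and Hausdorff distance are computed for binary masks, not the authors' evaluation code; the function names and the toy masks are illustrative.

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice overlap between two binary masks (1.0 = perfect agreement)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * intersection / denom if denom else 1.0

def hausdorff_distance(pred, target):
    """Symmetric Hausdorff distance between the two foreground pixel sets."""
    a = np.argwhere(pred)  # coordinates of foreground pixels
    b = np.argwhere(target)
    # Pairwise Euclidean distances between the two point sets.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    # Worst-case nearest-neighbour distance, taken in both directions.
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Two overlapping 4x4 squares, offset by one pixel in each direction.
pred = np.zeros((8, 8), dtype=np.uint8); pred[2:6, 2:6] = 1
gt = np.zeros((8, 8), dtype=np.uint8); gt[3:7, 3:7] = 1
print(dice_coefficient(pred, gt))   # 2*9 / (16+16) = 0.5625
print(hausdorff_distance(pred, gt)) # sqrt(2) ≈ 1.414
```

A higher Dice score and a lower Hausdorff distance both indicate a segmentation closer to the ground truth, which is why the paper reports improvements in both.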

[Figures 1–10 are available in the full-text article.]



Author information

Correspondence to Zaifeng Shi.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Cite this article

Shi, Z., Hu, Q., Yue, Y. et al. Automatic Nodule Segmentation Method for CT Images Using Aggregation-U-Net Generative Adversarial Networks. Sens Imaging 21, 39 (2020). https://doi.org/10.1007/s11220-020-00304-4
