Abdomen Multi-organ Segmentation Using Pseudo Labels and Two-Stage

  • Conference paper
Fast, Low-resource, and Accurate Organ and Pan-cancer Segmentation in Abdomen CT (FLARE 2023)

Abstract

Recently, nnU-Net has achieved excellent performance on many medical image segmentation tasks. However, it has some notable limitations: it supports only fully supervised training, and its resource consumption during prediction is excessive. In the FLARE23 abdominal multi-organ challenge, only partially labeled data were provided, and the volumes were so large that the original nnU-Net was difficult to run. We therefore designed a framework that uses generated pseudo labels and two-stage segmentation for fast and effective prediction. Specifically, we trained three nnU-Net models: one to generate high-quality pseudo labels for the unlabeled data, one to produce a coarse segmentation that guides cropping, and one to perform the final fine segmentation. Our method achieved average DSC scores of 88.87% and 38.00% for the organs and lesions, respectively, on the validation set, and the average running time and area under the GPU memory-time curve are 45 s and 3000 MB, respectively.
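The coarse-to-fine inference described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: `coarse_model` and `fine_model` stand in for the two trained nnU-Net predictors, and the `margin` padding value is a hypothetical choice. Stage one localizes the abdominal region on the full volume; stage two segments only the cropped region of interest, which is what reduces runtime and GPU memory.

```python
import numpy as np

def bounding_box(mask, margin=8):
    """Padded 3D bounding box around the foreground voxels of a coarse mask."""
    coords = np.argwhere(mask > 0)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + margin + 1, mask.shape)
    return tuple(slice(l, h) for l, h in zip(lo, hi))

def two_stage_predict(volume, coarse_model, fine_model, margin=8):
    """Stage 1: coarse model locates the target region on the full volume.
    Stage 2: fine model segments only the cropped region of interest,
    and the prediction is pasted back into a full-size label map."""
    coarse_mask = coarse_model(volume)
    roi = bounding_box(coarse_mask, margin)
    fine_labels = fine_model(volume[roi])
    full_labels = np.zeros(volume.shape, dtype=fine_labels.dtype)
    full_labels[roi] = fine_labels
    return full_labels
```

Because the fine model only ever sees the cropped sub-volume, its sliding-window inference covers far fewer patches than a whole-volume pass, at the cost of depending on the coarse stage not missing any foreground.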

References

  1. Badrinarayanan, V., Kendall, A., Cipolla, R.: SegNet: a deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 39(12), 2481–2495 (2017)

  2. Bilic, P., et al.: The liver tumor segmentation benchmark (LiTS). Med. Image Anal. 84, 102680 (2023)

  3. Cao, H., et al.: Swin-Unet: Unet-like pure transformer for medical image segmentation. In: Karlinsky, L., Michaeli, T., Nishino, K. (eds.) ECCV 2022. LNCS, vol. 13803, pp. 205–218. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-25066-8_9

  4. Clark, K., et al.: The cancer imaging archive (TCIA): maintaining and operating a public information repository. J. Digit. Imaging 26(6), 1045–1057 (2013)

  5. Gatidis, S., et al.: The AutoPET challenge: towards fully automated lesion segmentation in oncologic PET/CT imaging. Preprint at Research Square (Nature Portfolio) (2023). https://doi.org/10.21203/rs.3.rs-2572595/v1

  6. Gatidis, S., et al.: A whole-body FDG-PET/CT dataset with manually annotated tumor lesions. Sci. Data 9(1), 601 (2022)

  7. Heller, N., et al.: The state of the art in kidney and kidney tumor segmentation in contrast-enhanced CT imaging: results of the KiTS19 challenge. Med. Image Anal. 67, 101821 (2021)

  8. Heller, N., et al.: An international challenge to use artificial intelligence to define the state-of-the-art in kidney and kidney tumor segmentation in CT imaging. Proc. Am. Soc. Clin. Oncol. 38(6), 626 (2020)

  9. Huang, Z., et al.: Revisiting nnU-net for iterative pseudo labeling and efficient sliding window inference. In: Ma, J., Wang, B. (eds.) FLARE 2022. LNCS, vol. 13816, pp. 178–189. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-23911-3_16

  10. Isensee, F., Jaeger, P.F., Kohl, S.A., Petersen, J., Maier-Hein, K.H.: nnU-net: a self-configuring method for deep learning-based biomedical image segmentation. Nat. Methods 18(2), 203–211 (2021)

  11. Ma, J., He, Y., Li, F., Han, L., You, C., Wang, B.: Segment anything in medical images. Nat. Commun. 15, 654 (2024)

  12. Ma, J., et al.: Fast and low-GPU-memory abdomen CT organ segmentation: the flare challenge. Med. Image Anal. 82, 102616 (2022)

  13. Ma, J., et al.: Unleashing the strengths of unlabeled data in pan-cancer abdominal organ quantification: the FLARE22 challenge. arXiv preprint arXiv:2308.05862 (2023)

  14. Ma, J., et al.: AbdomenCT-1k: is abdominal organ segmentation a solved problem? IEEE Trans. Pattern Anal. Mach. Intell. 44(10), 6695–6714 (2022)

  15. Pavao, A., et al.: CodaLab competitions: an open source platform to organize scientific challenges. J. Mach. Learn. Res. 24(198), 1–6 (2023)

  16. Ronneberger, O., Fischer, P., Brox, T.: U-net: convolutional networks for biomedical image segmentation. In: Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F. (eds.) MICCAI 2015. LNCS, vol. 9351, pp. 234–241. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-24574-4_28

  17. Simpson, A.L., et al.: A large annotated medical image dataset for the development and evaluation of segmentation algorithms. arXiv preprint arXiv:1902.09063 (2019)

  18. Wang, E., Zhao, Y., Wu, Y.: Cascade dual-decoders network for abdominal organs segmentation. In: Ma, J., Wang, B. (eds.) FLARE 2022. LNCS, vol. 13816, pp. 202–213. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-23911-3_18

  19. Wasserthal, J., et al.: TotalSegmentator: robust segmentation of 104 anatomic structures in CT images. Radiol.: Artif. Intell. 5(5), e230024 (2023)

  20. Wu, M., Ding, W., Yang, M., Huang, L.: Multi-depth boundary-aware left atrial scar segmentation network. In: Zhuang, X., Li, L., Wang, S., Wu, F. (eds.) LAScarQS 2022. LNCS, vol. 13586, pp. 16–23. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-31778-1_2

  21. You, C., Xiang, J., Su, K., Zhang, X., Dong, S., Onofrey, J., Staib, L., Duncan, J.S.: Incremental learning meets transfer learning: application to multi-site prostate MRI segmentation. In: Albarqouni, S., et al. (eds.) DeCaF FAIR 2022. LNCS, vol. 13573, pp. 3–16. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-18523-6_1

  22. Yushkevich, P.A., Gao, Y., Gerig, G.: ITK-SNAP: an interactive tool for semi-automatic segmentation of multi-modality biomedical images. In: Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 3342–3345 (2016)

  23. Zhang, X., Yang, X., Huang, L., Huang, L.: Two stage of histogram matching augmentation for domain generalization: application to left atrial segmentation. In: Zhuang, X., Li, L., Wang, S., Wu, F. (eds.) LAScarQS 2022. LNCS, vol. 13586, pp. 60–68. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-31778-1_6

Acknowledgements

The authors of this paper declare that the segmentation method they implemented for participation in the FLARE 2023 challenge has not used any pre-trained models nor additional datasets other than those provided by the organizers. The proposed solution is fully automatic without any manual intervention. We thank all the data owners for making the CT scans publicly available and CodaLab [15] for hosting the challenge platform.

This work was supported by the National Natural Science Foundation of China (62271149) and the Fujian Provincial Natural Science Foundation (2021J02019, 2021J01578).

Author information

Corresponding author

Correspondence to Liqin Huang.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Yang, X., Zhang, X., Yan, X., Ding, W., Chen, H., Huang, L. (2024). Abdomen Multi-organ Segmentation Using Pseudo Labels and Two-Stage. In: Ma, J., Wang, B. (eds) Fast, Low-resource, and Accurate Organ and Pan-cancer Segmentation in Abdomen CT. FLARE 2023. Lecture Notes in Computer Science, vol 14544. Springer, Cham. https://doi.org/10.1007/978-3-031-58776-4_4

  • DOI: https://doi.org/10.1007/978-3-031-58776-4_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-58775-7

  • Online ISBN: 978-3-031-58776-4

  • eBook Packages: Computer Science, Computer Science (R0)
