
Semi-supervised Detection, Identification and Segmentation for Abdominal Organs

  • Conference paper
  • First Online:
Fast and Low-Resource Semi-supervised Abdominal Organ Segmentation (FLARE 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13816)


Abstract

Abdominal organ segmentation is an important prerequisite for many medical image analysis applications. Methods based on U-Net have demonstrated their scalability and achieved great success in different organ segmentation tasks. However, the limited amount of data and labels hinders the training of these methods. Moreover, traditional U-Net models based on convolutional neural networks suffer from limited receptive fields. Lacking the ability to model long-range dependencies from a global perspective, these methods are prone to producing false positive predictions. In this paper, we propose a new semi-supervised learning algorithm based on the vision transformer to overcome these challenges. The overall architecture of our method consists of three stages. In the first stage, we tackle the abdomen region localization problem with a lightweight segmentation network. In the second stage, we adopt a vision transformer model equipped with a semi-supervised learning strategy to detect the different abdominal organs. In the final stage, we attach multiple organ-specific segmentation networks that automatically segment each organ from its bounding box. We evaluate our method on the MICCAI FLARE 2022 challenge dataset. Experimental results demonstrate the effectiveness of our method: our segmentation results currently achieve a mean DSC of 0.897 on the FLARE 2022 validation leaderboard.
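
To make the three-stage design concrete, the following is a minimal sketch, not the authors' released code, of how such a pipeline could be wired together. Every class and function name below (ThreeStagePipeline, locate_abdomen, detect_organs, segment_organ) is a hypothetical placeholder standing in for the lightweight abdomen locator, the semi-supervised vision-transformer detector, and the organ-specific segmentation networks described in the abstract.

```python
# Hypothetical sketch of the three-stage pipeline described in the abstract.
# The three stage models are passed in as callables; trivial stand-ins are
# used at the bottom so the data flow can be run end to end.
from dataclasses import dataclass
from typing import Callable, Dict, Tuple

import numpy as np

Volume = np.ndarray                 # CT volume, shape (D, H, W)
Mask = np.ndarray                   # label mask, same shape as its input crop
BBox = Tuple[slice, slice, slice]   # 3D bounding box expressed as index slices


@dataclass
class ThreeStagePipeline:
    # Stage 1: lightweight network segmenting the coarse abdomen region.
    locate_abdomen: Callable[[Volume], Mask]
    # Stage 2: vision-transformer detector (trained with a semi-supervised
    # strategy) returning one bounding box per organ label.
    detect_organs: Callable[[Volume], Dict[str, BBox]]
    # Stage 3: one organ-specific segmentation network per organ.
    segment_organ: Dict[str, Callable[[Volume], Mask]]

    def __call__(self, ct: Volume) -> Dict[str, Mask]:
        # 1) Restrict all further processing to the abdomen region.
        region = self.locate_abdomen(ct)
        zs, ys, xs = np.nonzero(region)
        roi: BBox = (slice(zs.min(), zs.max() + 1),
                     slice(ys.min(), ys.max() + 1),
                     slice(xs.min(), xs.max() + 1))
        abdomen = ct[roi]

        # 2) Detect a per-organ bounding box inside the abdomen crop.
        boxes = self.detect_organs(abdomen)

        # 3) Segment each organ from its own crop and paste the prediction
        #    back into a full-size mask.
        results: Dict[str, Mask] = {}
        for organ, box in boxes.items():
            full = np.zeros(ct.shape, dtype=np.uint8)
            full[roi][box] = self.segment_organ[organ](abdomen[box])
            results[organ] = full
        return results


# Illustration only: dummy callables so the sketch runs without trained models.
if __name__ == "__main__":
    ct = np.random.rand(64, 128, 128).astype(np.float32)
    pipe = ThreeStagePipeline(
        locate_abdomen=lambda v: (v > 0.1).astype(np.uint8),
        detect_organs=lambda v: {"liver": (slice(0, 16), slice(0, 32), slice(0, 32))},
        segment_organ={"liver": lambda crop: (crop > 0.5).astype(np.uint8)},
    )
    print({organ: int(mask.sum()) for organ, mask in pipe(ct).items()})
```

The point of the sketch is the data flow: only the abdomen crop reaches the detector, and only per-organ crops reach the final segmenters, which is what keeps the later stages small and fast.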



Acknowledgements

The authors of this paper declare that the segmentation method they implemented for participation in the FLARE 2022 challenge did not use any pre-trained models or additional datasets other than those provided by the organizers. The proposed solution is fully automatic, without any manual intervention.

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Heng Guo.

Editor information

Editors and Affiliations

Rights and permissions

Reprints and permissions

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Sun, M., Jiang, Y., Guo, H. (2022). Semi-supervised Detection, Identification and Segmentation for Abdominal Organs. In: Ma, J., Wang, B. (eds) Fast and Low-Resource Semi-supervised Abdominal Organ Segmentation. FLARE 2022. Lecture Notes in Computer Science, vol 13816. Springer, Cham. https://doi.org/10.1007/978-3-031-23911-3_4


  • DOI: https://doi.org/10.1007/978-3-031-23911-3_4

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-23910-6

  • Online ISBN: 978-3-031-23911-3

  • eBook Packages: Computer Science, Computer Science (R0)
