
Swin-UMamba: Mamba-Based UNet with ImageNet-Based Pretraining

  • Conference paper
Medical Image Computing and Computer Assisted Intervention – MICCAI 2024 (MICCAI 2024)

Abstract

Accurate medical image segmentation demands the integration of multi-scale information, spanning from local features to global dependencies. However, existing methods struggle to model long-range global information: convolutional neural networks are constrained by their local receptive fields, and vision transformers suffer from the quadratic complexity of their attention mechanism. Recently, Mamba-based models have attracted great attention for their impressive ability in long-sequence modeling. Several studies have demonstrated that these models can outperform popular vision models across various tasks, offering higher accuracy, lower memory consumption, and less computational burden. However, existing Mamba-based models are mostly trained from scratch and do not exploit the power of pretraining, which has proven quite effective for data-efficient medical image analysis. This paper introduces a novel Mamba-based model, Swin-UMamba, designed specifically for medical image segmentation and leveraging the advantages of ImageNet-based pretraining. Our experimental results reveal the vital role of ImageNet-based pretraining in enhancing the performance of Mamba-based models. Swin-UMamba outperforms CNNs, ViTs, and the latest Mamba-based models by a large margin. Notably, on the AbdomenMRI, Endoscopy, and Microscopy datasets, Swin-UMamba outperforms its closest counterpart, U-Mamba, by an average score of 2.72%. The code and models of Swin-UMamba are publicly available at: https://github.com/JiarunLiu/Swin-UMamba.
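The abstract describes the overall design only at a high level: a UNet-style encoder-decoder in which the encoder is built from visual state-space (Mamba) blocks and initialized from ImageNet-pretrained weights, while the decoder and segmentation head are trained from scratch. The following is a minimal, illustrative PyTorch sketch of that wiring, not the authors' implementation: `PlaceholderStage` merely stands in for the paper's Mamba/VSS blocks, and the names `MambaUNetSketch` and `load_pretrained_encoder` are assumptions made here for illustration.

```python
# Hypothetical sketch (not the authors' code): a UNet-style segmentation network whose
# encoder stages stand in for Mamba/VSS blocks, with ImageNet-pretrained weights
# loaded into the encoder only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PlaceholderStage(nn.Module):
    """Stand-in for a stage of visual state-space (Mamba) blocks; downsamples by 2."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.down = nn.Conv2d(in_ch, out_ch, kernel_size=2, stride=2)
        self.mix = nn.Sequential(
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.GELU(),
        )

    def forward(self, x):
        return self.mix(self.down(x))


class DecoderBlock(nn.Module):
    """Upsample, concatenate the skip connection, and fuse with a small conv block."""
    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, in_ch, kernel_size=2, stride=2)
        self.fuse = nn.Sequential(
            nn.Conv2d(in_ch + skip_ch, out_ch, kernel_size=3, padding=1),
            nn.GELU(),
        )

    def forward(self, x, skip):
        x = self.up(x)
        return self.fuse(torch.cat([x, skip], dim=1))


class MambaUNetSketch(nn.Module):
    """Encoder-decoder wiring in the spirit of Swin-UMamba (illustrative only)."""
    def __init__(self, in_ch=3, num_classes=2, dims=(48, 96, 192, 384)):
        super().__init__()
        self.stem = nn.Conv2d(in_ch, dims[0], kernel_size=4, stride=4)  # patch embedding
        self.stages = nn.ModuleList(
            PlaceholderStage(dims[i], dims[i + 1]) for i in range(len(dims) - 1)
        )
        self.decoders = nn.ModuleList(
            DecoderBlock(dims[i + 1], dims[i], dims[i])
            for i in reversed(range(len(dims) - 1))
        )
        self.head = nn.Conv2d(dims[0], num_classes, kernel_size=1)

    def forward(self, x):
        size = x.shape[-2:]
        feats = [self.stem(x)]            # collect encoder features for skip connections
        for stage in self.stages:
            feats.append(stage(feats[-1]))
        y = feats[-1]
        for dec, skip in zip(self.decoders, reversed(feats[:-1])):
            y = dec(y, skip)
        y = self.head(y)
        return F.interpolate(y, size=size, mode="bilinear", align_corners=False)


def load_pretrained_encoder(model, checkpoint_path):
    """Copy matching weights from an ImageNet-pretrained checkpoint; the decoder and
    head keep their random initialization because strict=False ignores missing keys."""
    state = torch.load(checkpoint_path, map_location="cpu")
    missing, unexpected = model.load_state_dict(state, strict=False)
    print(f"missing: {len(missing)}, unexpected: {len(unexpected)}")


if __name__ == "__main__":
    net = MambaUNetSketch(in_ch=3, num_classes=2)
    logits = net(torch.randn(1, 3, 256, 256))
    print(logits.shape)  # torch.Size([1, 2, 256, 256])
```

Loading the checkpoint with `strict=False` is one common way to transfer only the encoder weights that match by name and shape while leaving the randomly initialized decoder and segmentation head untouched; the actual Swin-UMamba weight-transfer procedure may differ in its details.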

J. Liu and H. Yang contributed equally.



Notes

  1. https://github.com/JiarunLiu/Swin-UMamba.


Acknowledgments

This research was partly supported by the National Key R&D Program of China (2023YFA1011400), National Natural Science Foundation of China (62222118, U22A2040), Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application (2022B1212010-011), Shenzhen Science and Technology Program (RCYX20210706092104034, JCYJ20220531100213029), the major key project of Peng Cheng Laboratory under grant PCL2023AS1-2, Key Laboratory for Magnetic Resonance and Multimodality Imaging of Guangdong Province (2023B1212060052), and Youth Innovation Promotion Association CAS.

Author information


Corresponding authors

Correspondence to Hong-Yu Zhou or Shanshan Wang.


Ethics declarations

Disclosure of Interests

The authors have no competing interests to declare that are relevant to the content of this article.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (PDF 8139 KB)


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Liu, J. et al. (2024). Swin-UMamba: Mamba-Based UNet with ImageNet-Based Pretraining. In: Linguraru, M.G., et al. Medical Image Computing and Computer Assisted Intervention – MICCAI 2024. MICCAI 2024. Lecture Notes in Computer Science, vol 15009. Springer, Cham. https://doi.org/10.1007/978-3-031-72114-4_59


  • DOI: https://doi.org/10.1007/978-3-031-72114-4_59


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-72113-7

  • Online ISBN: 978-3-031-72114-4

  • eBook Packages: Computer Science, Computer Science (R0)
