Abstract
Accurate segmentation of brain tumors from MRI images is crucial for assessing tumor size, location, and characteristics, which in turn inform treatment decisions and prognosis. Manual delineation, however, is time-consuming, labor-intensive, and prone to inter-observer variability. This paper presents a brain tumor image segmentation framework that addresses these challenges by leveraging information from multiple MRI sequences. The framework consists of encoder, decoder, and data fusion modules. The encoder combines Bi-ConvLSTM and Transformer models, enabling comprehensive use of both local and global details in each sequence; the decoder employs a lightweight MLP architecture. In addition, we propose a data fusion module that integrates self-supervised multi-sequence segmentation results by learning the weight of each sequence's prediction in an end-to-end manner, ensuring robust fusion results. Experimental validation on the BRATS 2018 dataset demonstrates the excellent performance of the proposed automatic segmentation framework: compared with other multi-sequence fusion segmentation models, our framework achieves the highest Dice score in every tumor region.
This research was funded by the Scientific Research Fund of the Zhejiang Provincial Education Department (Grant No. Y202147323), the National Natural Science Foundation of China (Grant No. NSFC-61803148), and the Basic Scientific Research Business Cost Projects of Heilongjiang Provincial Undergraduate Universities for 2020 (YJSCX2020-212HKD) and 2022 (2022-KYYWF-0565).
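The end-to-end weighting performed by the fusion module can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the function name `fuse_sequence_predictions` and the `logits` argument are hypothetical, and in the actual framework the per-sequence weights are learned jointly with the network during training rather than supplied by hand.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array of scores."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def fuse_sequence_predictions(pred_maps, logits):
    """Fuse per-sequence segmentation probability maps with softmax weights.

    pred_maps: (S, H, W) array of per-sequence foreground probabilities.
    logits:    (S,) scores standing in for the learned fusion parameters;
               softmax turns them into weights that sum to 1.
    Returns the (H, W) weighted fusion of the S prediction maps.
    """
    w = softmax(np.asarray(logits, dtype=np.float64))
    return np.tensordot(w, np.asarray(pred_maps, dtype=np.float64), axes=1)

# Toy example: two sequences whose predictions disagree, with the
# (assumed) learned scores favoring the first sequence.
preds = np.stack([np.full((2, 2), 0.9), np.full((2, 2), 0.1)])
fused = fuse_sequence_predictions(preds, [2.0, 0.0])
```

Because the weights come from a softmax, they remain positive and sum to one, so the fused output stays a valid probability map regardless of how the underlying scores shift during training.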
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Zhang, G., Shi, J., Liu, W., Zhang, G., He, Y. (2024). Research on Automatic Segmentation Algorithm of Brain Tumor Image Based on Multi-sequence Self-supervised Fusion in Complex Scenes. In: Luo, B., Cheng, L., Wu, ZG., Li, H., Li, C. (eds) Neural Information Processing. ICONIP 2023. Communications in Computer and Information Science, vol 1964. Springer, Singapore. https://doi.org/10.1007/978-981-99-8141-0_3
DOI: https://doi.org/10.1007/978-981-99-8141-0_3
Publisher Name: Springer, Singapore
Print ISBN: 978-981-99-8140-3
Online ISBN: 978-981-99-8141-0