
Mask-guided generative adversarial network for MRI-based CT synthesis

Published: 18 July 2024

Abstract

Synthetic computed tomography (sCT) images generated from magnetic resonance imaging (MRI) data have broad applications in clinical medicine, including radiation oncology and surgical planning. With the development of deep learning for medical image analysis, convolution-based generative adversarial networks (GANs) have demonstrated promising performance in synthesizing CT from MRI. However, many GAN variants generate sCT images from MRI scans in an end-to-end manner, ignoring the distribution differences between tissues and potentially producing poor, unrealistic synthetic results. To address this problem, we propose MGDGAN, a mask-guided dual network based on the GAN architecture for CT synthesis from MRI. Specifically, a mask that delineates the bone region is first learned to guide the subsequent synthesis; the bone part (sBone) and the soft-tissue part (sSoft-tissue) are then synthesized through two parallel branches. Finally, the sCT image is obtained by fusing sBone and sSoft-tissue. Experimental results indicate that MGDGAN generates sCT images with high accuracy in fine bone structure, brain tissue, and cerebral lesions, and that these images are visually closer to the real CT (rCT) images. In quantitative evaluation, MGDGAN outperforms state-of-the-art methods, including CycleGAN, Pix2Pix, ECNN, cGAN9, APS, and ResViT, on multiple datasets.
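
The abstract describes a three-stage pipeline: learn a bone mask from the MRI, synthesize the bone and soft-tissue parts in two parallel mask-guided branches, and fuse the two parts into the final sCT. The listing below is a minimal PyTorch sketch of that pipeline only; the module layouts, channel sizes, mask conditioning, and the mask-weighted fusion rule are illustrative assumptions rather than the authors' implementation, and the adversarial discriminator and training losses are omitted.

# Minimal sketch of the mask-guided, dual-branch synthesis pipeline summarised in
# the abstract. Module layouts, channel sizes, and the mask-weighted fusion rule
# are illustrative assumptions; the discriminator and losses are omitted.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    # Small conv -> instance-norm -> ReLU stack reused by every sub-network below.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.InstanceNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class MaskGuidedGenerator(nn.Module):
    # Stage 1 learns a bone mask from the MRI slice; stage 2 synthesizes the bone
    # part (sBone) and the soft-tissue part (sSoft-tissue) in two parallel branches
    # guided by that mask; stage 3 fuses the two parts into the final sCT.
    def __init__(self, base_ch=32):
        super().__init__()
        self.mask_net = nn.Sequential(
            conv_block(1, base_ch), conv_block(base_ch, base_ch),
            nn.Conv2d(base_ch, 1, kernel_size=1), nn.Sigmoid(),
        )
        self.bone_branch = nn.Sequential(
            conv_block(2, base_ch), conv_block(base_ch, base_ch),
            nn.Conv2d(base_ch, 1, kernel_size=1),
        )
        self.soft_branch = nn.Sequential(
            conv_block(2, base_ch), conv_block(base_ch, base_ch),
            nn.Conv2d(base_ch, 1, kernel_size=1),
        )

    def forward(self, mri):
        mask = self.mask_net(mri)                                  # soft bone mask in [0, 1]
        s_bone = self.bone_branch(torch.cat([mri, mask], 1))       # bone-region synthesis
        s_soft = self.soft_branch(torch.cat([mri, 1 - mask], 1))   # soft-tissue synthesis
        s_ct = mask * s_bone + (1 - mask) * s_soft                 # mask-weighted fusion
        return s_ct, mask


if __name__ == "__main__":
    mri = torch.randn(1, 1, 256, 256)          # one single-channel MRI slice
    s_ct, mask = MaskGuidedGenerator()(mri)
    print(s_ct.shape, mask.shape)              # both torch.Size([1, 1, 256, 256])

In practice the full model would be trained adversarially with paired MRI/CT supervision; the sketch only captures the mask-then-branches-then-fusion structure that the abstract describes.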



Published In

Knowledge-Based Systems, Volume 295, Issue C
Jul 2024
880 pages

Publisher

Elsevier Science Publishers B. V.

Netherlands


Author Tags

  1. MR-to-CT image synthesis
  2. Deep learning
  3. Mask-guided
  4. Dual network
  5. Generative adversarial network

Qualifiers

  • Research-article
