Abstract
Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) are two widely used medical imaging modalities in radiology for visualising the anatomy and physiological processes of the human body. Radiotherapy planning requires both CT and MR images. The high radiation exposure of CT and the cost of acquiring multiple modalities motivate reliable MRI-to-CT synthesis. The MRI-to-CT synthesiser introduced in this paper implements a deep learning model called the Edge-aware Generative Adversarial Network (EaGAN). This model incorporates edge information into the conventional Generative Adversarial Network (GAN) by using the Sobel operator to compute edge maps along two directions, allowing the network to focus on image structure and the boundaries present in the image. Three variants of the EaGAN model are trained and tested. The discriminator-induced EaGAN (dEaGAN), which learns edge information adversarially, is shown to produce the best results, achieving a Mean Absolute Error (MAE) of 67.13 HU, a Peak Signal-to-Noise Ratio (PSNR) of 30.340 dB, and a Structural Similarity Index (SSIM) of 0.969. The proposed model outperforms state-of-the-art models and generates CT images closer to the ground-truth CTs. The synthesised CTs can benefit medical diagnosis and treatment planning.
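The edge maps mentioned above come from the standard Sobel operator: the image is convolved with horizontal and vertical kernels and the two responses are combined into a gradient magnitude. The sketch below illustrates this step in PyTorch; the function names and the L1 edge-consistency term are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def sobel_edge_map(image: torch.Tensor) -> torch.Tensor:
    """Compute a Sobel gradient-magnitude edge map.

    image: tensor of shape (N, 1, H, W), e.g. MR or synthesised CT slices.
    Returns a tensor of the same shape.
    """
    # Sobel kernels for the horizontal (x) and vertical (y) directions.
    kx = torch.tensor([[-1., 0., 1.],
                       [-2., 0., 2.],
                       [-1., 0., 1.]], device=image.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)

    gx = F.conv2d(image, kx, padding=1)  # gradient along x
    gy = F.conv2d(image, ky, padding=1)  # gradient along y

    # Gradient magnitude; the small epsilon keeps the sqrt differentiable at 0.
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)


def edge_l1_loss(synth_ct: torch.Tensor, real_ct: torch.Tensor) -> torch.Tensor:
    # Hypothetical edge-consistency term a generator loss could include,
    # penalising differences between the edge maps of synthetic and real CT.
    return torch.abs(sobel_edge_map(synth_ct) - sobel_edge_map(real_ct)).mean()
```

In the discriminator-induced variant, such edge maps would additionally be passed to the discriminator so that edge fidelity is learned adversarially rather than only through a fixed loss term.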
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Joseph, J., Prasanth, R., Maret, S.A., Pournami, P.N., Jayaraj, P.B., Puzhakkal, N. (2023). CT Image Synthesis from MR Image Using Edge-Aware Generative Adversarial Network. In: Gupta, D., Bhurchandi, K., Murala, S., Raman, B., Kumar, S. (eds) Computer Vision and Image Processing. CVIP 2022. Communications in Computer and Information Science, vol 1776. Springer, Cham. https://doi.org/10.1007/978-3-031-31407-0_11
DOI: https://doi.org/10.1007/978-3-031-31407-0_11
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-31406-3
Online ISBN: 978-3-031-31407-0
eBook Packages: Computer Science, Computer Science (R0)