DOI: 10.1145/3584376.3584566
Research article

Brain MRI synthesis based on dual-generator generative adversarial network

Published: 19 April 2023

Abstract

Magnetic Resonance Imaging (MRI) acquired at multiple contrasts increases the information available for clinical diagnosis. However, scan-time limits, cost, and the need for patient cooperation during scanning mean that some contrast images cannot always be obtained, and others may be corrupted during acquisition. A synthesis model that recovers the missing contrast from an existing one can therefore improve diagnostic value. We propose a new brain MRI synthesis method based on generative adversarial networks (GANs). The method uses a coarse-to-fine dual-generator GAN structure and incorporates the Fréchet Inception Distance (FID) to synthesize more reliable T1-weighted images from T2-weighted images. We conducted experiments on 2730 T1- and T2-weighted images, using structural similarity, peak signal-to-noise ratio, and mean squared error as evaluation metrics. Quantitative and qualitative results show that our method outperforms the original model.
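As a rough illustration of the coarse-to-fine dual-generator idea described above, the sketch below shows one plausible PyTorch layout: a first generator produces a coarse T1-weighted estimate from the T2-weighted input, a second generator refines that estimate while conditioned on the same T2 input, and a PatchGAN-style discriminator scores the (T2, synthetic T1) pair. All class names, layer choices, and tensor shapes are illustrative assumptions, not the authors' implementation.

# Hypothetical sketch of a coarse-to-fine dual-generator GAN for T2 -> T1 synthesis.
# Architecture details are assumptions made for illustration only.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.InstanceNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class CoarseGenerator(nn.Module):
    """First stage: maps a T2-weighted slice to a rough T1-weighted estimate."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(1, 64), conv_block(64, 64),
            nn.Conv2d(64, 1, kernel_size=3, padding=1),
            nn.Tanh(),
        )
    def forward(self, t2):
        return self.net(t2)

class FineGenerator(nn.Module):
    """Second stage: refines the coarse T1 estimate, conditioned on the T2 input."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(2, 64), conv_block(64, 64),
            nn.Conv2d(64, 1, kernel_size=3, padding=1),
            nn.Tanh(),
        )
    def forward(self, t2, coarse_t1):
        return self.net(torch.cat([t2, coarse_t1], dim=1))

class PatchDiscriminator(nn.Module):
    """PatchGAN-style discriminator over (T2, T1) image pairs."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 1, 4, padding=1),
        )
    def forward(self, t2, t1):
        return self.net(torch.cat([t2, t1], dim=1))

# One forward pass through the coarse-to-fine pipeline.
t2 = torch.randn(4, 1, 256, 256)                     # batch of T2-weighted slices
g_coarse, g_fine, d = CoarseGenerator(), FineGenerator(), PatchDiscriminator()
coarse_t1 = g_coarse(t2)                             # coarse T1 estimate
fake_t1 = g_fine(t2, coarse_t1)                      # refined synthetic T1
score = d(t2, fake_t1)                               # adversarial feedback for both generators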
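The evaluation reported in the abstract rests on FID together with SSIM, PSNR, and MSE. The following is a minimal sketch of how these quantities could be computed, assuming scikit-image for the per-slice metrics and the standard Gaussian form of the Fréchet distance over Inception feature statistics (the Inception feature-extraction step is omitted); it is not the authors' evaluation code.

# Minimal evaluation sketch; library choices and data ranges are assumptions.
import numpy as np
from scipy import linalg
from skimage.metrics import (
    mean_squared_error,
    peak_signal_noise_ratio,
    structural_similarity,
)

def evaluate_slice(real_t1: np.ndarray, fake_t1: np.ndarray) -> dict:
    """Per-slice MSE, PSNR, and SSIM; both inputs are 2D arrays scaled to [0, 1]."""
    return {
        "mse": mean_squared_error(real_t1, fake_t1),
        "psnr": peak_signal_noise_ratio(real_t1, fake_t1, data_range=1.0),
        "ssim": structural_similarity(real_t1, fake_t1, data_range=1.0),
    }

def frechet_distance(mu_r, sigma_r, mu_f, sigma_f):
    """FID between Gaussians fitted to Inception features of real and synthetic sets."""
    covmean = linalg.sqrtm(sigma_r @ sigma_f)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # discard numerical imaginary noise from sqrtm
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(sigma_r + sigma_f - 2.0 * covmean))

# Usage with random data standing in for a registered (real, synthetic) slice pair.
real = np.random.rand(256, 256)
fake = np.clip(real + 0.05 * np.random.randn(256, 256), 0.0, 1.0)
print(evaluate_slice(real, fake))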


Published In

RICAI '22: Proceedings of the 2022 4th International Conference on Robotics, Intelligent Control and Artificial Intelligence
December 2022
1396 pages
ISBN:9781450398343
DOI:10.1145/3584376

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. FID
  2. dual network structure
  3. image generation

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

RICAI 2022

Acceptance Rates

Overall Acceptance Rate 140 of 294 submissions, 48%

