Generative priors for magnetic resonance (MR) images have been used in a number of medical image analysis applications. Given the plethora of deep learning methods that operate on 2D medical images, it would be beneficial to have a generator trained on complete, high-resolution 2D head MR slices from multiple orientations and multiple contrasts. In this work, we trained a StyleGAN3-T model on T1- and T2-weighted head MR slices from public data. We restricted the training corpus to slices from 1 mm isotropic volumes in the three standard radiological views, with a fixed set of pre-processing steps. To retain full applicability to downstream tasks, we did not skull-strip the images. Several analyses of the trained network, including examination of qualitative samples, interpolation of latent codes, and style mixing, demonstrate its expressivity. Images sampled from this network can be used for a variety of downstream tasks. The weights are open-sourced and available at https://gitlab.com/iacl/high-res-mri-head-slice-gan.
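The latent-code interpolation analysis mentioned above can be sketched generically. The helper below is a standard spherical interpolation (slerp) between two Gaussian latent codes, a common choice for GAN latent traversals; it is a hypothetical illustration, not code from the released repository, and the generator call is only indicated in a comment.

```python
import numpy as np

def slerp(z0, z1, t):
    """Spherical interpolation between two latent codes.

    Treats z0 and z1 as points on a hypersphere, which tends to keep
    intermediate codes in the high-density region of a Gaussian latent
    space (unlike straight linear interpolation).
    """
    z0 = np.asarray(z0, dtype=np.float64)
    z1 = np.asarray(z1, dtype=np.float64)
    # Angle between the two latent vectors.
    cos_omega = np.dot(z0, z1) / (np.linalg.norm(z0) * np.linalg.norm(z1))
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    if np.isclose(omega, 0.0):
        # Nearly parallel vectors: fall back to linear interpolation.
        return (1.0 - t) * z0 + t * z1
    so = np.sin(omega)
    return np.sin((1.0 - t) * omega) / so * z0 + np.sin(t * omega) / so * z1

# Interpolate between two random 512-dimensional codes (StyleGAN3's default
# latent dimensionality); each step would be fed to the generator.
rng = np.random.default_rng(0)
z_a, z_b = rng.standard_normal(512), rng.standard_normal(512)
path = [slerp(z_a, z_b, t) for t in np.linspace(0.0, 1.0, 8)]
# e.g. imgs = [G(z) for z in path]  # G is the trained generator (not shown)
```

Decoding each code along `path` yields a smooth visual transition between the two endpoint images, which is one way the abstract's expressivity claim can be inspected.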