Zero-shot text-to-image generation

A Ramesh, M Pavlov, G Goh, S Gray… - International Conference on Machine Learning, 2021 - proceedings.mlr.press
Abstract
Text-to-image generation has traditionally focused on finding better modeling assumptions for training on a fixed dataset. These assumptions might involve complex architectures, auxiliary losses, or side information such as object part labels or segmentation masks supplied during training. We describe a simple approach for this task based on a transformer that autoregressively models the text and image tokens as a single stream of data. With sufficient data and scale, our approach is competitive with previous domain-specific models when evaluated in a zero-shot fashion.
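To make the single-stream formulation concrete, below is a minimal sketch (not the authors' released code) of the idea described in the abstract: BPE text tokens and discrete image tokens (e.g., from a discrete VAE codebook) are concatenated into one sequence and modeled left to right with a standard next-token cross-entropy loss. The vocabulary sizes and sequence lengths roughly follow the paper's description, but the tiny transformer dimensions and all variable names here are illustrative assumptions.

```python
# Sketch of autoregressive modeling over a single stream of text + image tokens.
# Sizes marked "assumed" are illustrative, not the paper's exact configuration.
import torch
import torch.nn as nn

TEXT_VOCAB, IMAGE_VOCAB = 16384, 8192    # text BPE vocab and image codebook size
TEXT_LEN, IMAGE_LEN = 256, 1024          # text tokens followed by 32x32 image tokens
D_MODEL, N_HEAD, N_LAYER = 512, 8, 6     # assumed small model for illustration

class SingleStreamTransformer(nn.Module):
    def __init__(self):
        super().__init__()
        # One shared embedding over the joint vocabulary:
        # ids [0, TEXT_VOCAB) are text, [TEXT_VOCAB, TEXT_VOCAB + IMAGE_VOCAB) are image.
        self.embed = nn.Embedding(TEXT_VOCAB + IMAGE_VOCAB, D_MODEL)
        self.pos = nn.Embedding(TEXT_LEN + IMAGE_LEN, D_MODEL)
        layer = nn.TransformerEncoderLayer(D_MODEL, N_HEAD, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, N_LAYER)
        self.head = nn.Linear(D_MODEL, TEXT_VOCAB + IMAGE_VOCAB)

    def forward(self, tokens):
        # tokens: (batch, seq_len) ids from the joint vocabulary
        seq_len = tokens.size(1)
        pos_ids = torch.arange(seq_len, device=tokens.device)
        x = self.embed(tokens) + self.pos(pos_ids)
        # Causal mask so each position attends only to earlier tokens.
        causal = nn.Transformer.generate_square_subsequent_mask(seq_len).to(tokens.device)
        x = self.blocks(x, mask=causal)
        return self.head(x)

# Training step: predict every next token (text and image alike) in the stream.
model = SingleStreamTransformer()
text = torch.randint(0, TEXT_VOCAB, (2, TEXT_LEN))
image = torch.randint(TEXT_VOCAB, TEXT_VOCAB + IMAGE_VOCAB, (2, IMAGE_LEN))
stream = torch.cat([text, image], dim=1)
logits = model(stream[:, :-1])
loss = nn.functional.cross_entropy(
    logits.reshape(-1, logits.size(-1)), stream[:, 1:].reshape(-1))
loss.backward()
```

At generation time the same model would be conditioned on the text prefix and sampled token by token over the image positions, with the sampled image codes decoded back to pixels by the image tokenizer's decoder.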