StarGAN v2: Diverse Image Synthesis for Multiple Domains

Y Choi, Y Uh, J Yoo, JW Ha - Proceedings of the IEEE/CVF …, 2020 - openaccess.thecvf.com
Abstract
A good image-to-image translation model should learn a mapping between different visual domains while satisfying the following properties: 1) diversity of generated images and 2) scalability over multiple domains. Existing methods address only one of these issues, exhibiting either limited diversity or requiring multiple models to cover all domains. We propose StarGAN v2, a single framework that tackles both and shows significantly improved results over the baselines. Experiments on CelebA-HQ and a new animal faces dataset (AFHQ) validate our superiority in terms of visual quality, diversity, and scalability. To better assess image-to-image translation models, we release AFHQ, a dataset of high-quality animal faces with large inter- and intra-domain differences. The code, pretrained models, and dataset are available at https://github.com/clovaai/stargan-v2.
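The abstract's core idea, one model that is both diverse and scalable over many domains, rests on conditioning a shared generator on per-domain style codes produced from random latents. The sketch below illustrates that structure only; the layer shapes, the plain linear layers, and all names (`mapping_network`, `generator`, dimension constants) are placeholder assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder dimensions (assumptions, not values from the paper).
LATENT_DIM, STYLE_DIM, IMG_DIM, NUM_DOMAINS = 16, 64, 256, 3

# Mapping network: a shared trunk plus one output head per domain,
# so a single latent z yields a domain-specific style code s.
W_shared = rng.standard_normal((LATENT_DIM, STYLE_DIM)) * 0.1
W_heads = rng.standard_normal((NUM_DOMAINS, STYLE_DIM, STYLE_DIM)) * 0.1

def mapping_network(z, y):
    """Map latent z to a style code for target domain y."""
    h = np.tanh(z @ W_shared)
    return h @ W_heads[y]

# Generator: translates content x conditioned on a style code s.
# One generator serves every domain -- scalability comes from the
# domain-specific heads above, not from per-domain generators.
W_img = rng.standard_normal((IMG_DIM, IMG_DIM)) * 0.05
W_style = rng.standard_normal((STYLE_DIM, IMG_DIM)) * 0.05

def generator(x, s):
    """Produce a translated image from content x and style s."""
    return np.tanh(x @ W_img + s @ W_style)

# Diversity: different latents give different style codes, hence
# different outputs for the same input image and target domain.
x = rng.standard_normal(IMG_DIM)
z1 = rng.standard_normal(LATENT_DIM)
z2 = rng.standard_normal(LATENT_DIM)
out1 = generator(x, mapping_network(z1, y=0))
out2 = generator(x, mapping_network(z2, y=0))
```

Sampling `z1` and `z2` independently produces distinct translations of the same source image, which is the "diversity" property; sweeping `y` over domains with the same model is the "scalability" property.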