ReenactGAN: Learning to Reenact Faces via Boundary Transfer

W Wu, Y Zhang, C Li, C Qian… - Proceedings of the European Conference on Computer Vision (ECCV), 2018 - openaccess.thecvf.com
Abstract
We present a novel learning-based framework for face reenactment. The proposed method, known as ReenactGAN, is capable of transferring facial movements and expressions from an arbitrary person’s monocular video input to a target person’s video. Instead of performing a direct transfer in the pixel space, which could result in structural artifacts, we first map the source face onto a boundary latent space. A transformer is subsequently used to adapt the source face’s boundary to the target’s boundary. Finally, a target-specific decoder is used to generate the reenacted target face. Thanks to this effective and reliable boundary-based transfer, our method can perform photo-realistic face reenactment. In addition, ReenactGAN is appealing in that the whole pipeline is purely feed-forward, so reenactment runs in real time (30 FPS on a single GTX 1080 GPU). The dataset and model are publicly available on our project page.
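
The abstract describes a three-stage, purely feed-forward pipeline: an encoder maps a source face into a boundary latent space, a transformer adapts the source boundary to the target person's boundary, and a target-specific decoder renders the reenacted face. The following minimal PyTorch sketch shows how such a pipeline could be wired together; the module names, layer configurations, and the choice of 15 boundary heatmaps are illustrative assumptions rather than the architectures used in the paper, and all training objectives are omitted.

    # Sketch of the three-stage feed-forward pipeline the abstract describes:
    # encoder -> boundary transformer -> target-specific decoder.
    # Channel counts and layer structure are assumptions for illustration.
    import torch
    import torch.nn as nn

    def conv_block(in_ch, out_ch):
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
            nn.InstanceNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def deconv_block(in_ch, out_ch):
        return nn.Sequential(
            nn.ConvTranspose2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
            nn.InstanceNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    class BoundaryEncoder(nn.Module):
        """Maps an RGB face crop to per-part facial boundary heatmaps."""
        def __init__(self, n_boundaries=15):
            super().__init__()
            self.net = nn.Sequential(
                conv_block(3, 64), conv_block(64, 128), conv_block(128, 256),
                deconv_block(256, 128), deconv_block(128, 64),
                nn.ConvTranspose2d(64, n_boundaries, 4, stride=2, padding=1),
                nn.Sigmoid(),  # heatmap values in [0, 1]
            )
        def forward(self, x):
            return self.net(x)

    class BoundaryTransformer(nn.Module):
        """Adapts source-identity boundaries toward the target's boundary space."""
        def __init__(self, n_boundaries=15):
            super().__init__()
            self.net = nn.Sequential(
                conv_block(n_boundaries, 64), conv_block(64, 128),
                deconv_block(128, 64),
                nn.ConvTranspose2d(64, n_boundaries, 4, stride=2, padding=1),
                nn.Sigmoid(),
            )
        def forward(self, b):
            return self.net(b)

    class TargetDecoder(nn.Module):
        """Renders the target person's face from adapted boundary heatmaps."""
        def __init__(self, n_boundaries=15):
            super().__init__()
            self.net = nn.Sequential(
                conv_block(n_boundaries, 64), conv_block(64, 128), conv_block(128, 256),
                deconv_block(256, 128), deconv_block(128, 64),
                nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),
                nn.Tanh(),  # RGB output in [-1, 1]
            )
        def forward(self, b):
            return self.net(b)

    # Inference is purely feed-forward: no per-frame optimization is needed,
    # which is what makes real-time reenactment feasible.
    encoder, transformer, decoder = BoundaryEncoder(), BoundaryTransformer(), TargetDecoder()
    source_frame = torch.randn(1, 3, 256, 256)   # one source video frame
    boundary = encoder(source_frame)             # source boundary heatmaps
    adapted = transformer(boundary)              # adapted to the target's space
    reenacted = decoder(adapted)                 # reenacted target face
    print(reenacted.shape)                       # torch.Size([1, 3, 256, 256])

Note that the decoder is target-specific per the abstract, so a sketch like this would train one decoder per target identity, while the encoder can be shared across identities.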