Video-to-Video Synthesis
Ting-Chun Wang¹, Ming-Yu Liu¹, Jun-Yan Zhu², Guilin Liu¹, Andrew Tao¹, Jan Kautz¹, Bryan Catanzaro¹
¹NVIDIA Corporation   ²MIT
[Paper (full)] [arXiv] [Video] [Code]
Abstract
We study the problem of video-to-video synthesis, whose goal is to learn a mapping function from an input source video (e.g., a sequence of semantic segmentation masks) to an output photorealistic video that precisely depicts the content of the source video. While its image counterpart, the image-to-image synthesis problem, is a popular topic, the video-to-video synthesis problem is less explored in the literature. Without modeling temporal dynamics, directly applying existing image synthesis approaches to an input video often results in temporally incoherent videos of low visual quality. In this paper, we propose a novel video-to-video synthesis approach under the generative adversarial learning framework. Through carefully designed generator and discriminator architectures, coupled with a spatio-temporal adversarial objective, we achieve high-resolution, photorealistic, temporally coherent video results on a diverse set of input formats including segmentation masks, sketches, and poses. Experiments on multiple benchmarks show the advantage of our method compared to strong baselines. In particular, our model is capable of synthesizing 2K-resolution videos of street scenes up to 30 seconds long, which significantly advances the state of the art of video synthesis. Finally, we apply our approach to future video prediction, outperforming several state-of-the-art competing systems.
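To make the spatio-temporal adversarial objective more concrete, below is a minimal, illustrative PyTorch sketch of the idea: a generator synthesizes the video frame by frame, conditioned on the current source frame and its previously generated frame; an image-level discriminator scores individual (source, frame) pairs for spatial realism, while a clip-level discriminator scores short stacks of consecutive frames to encourage temporal coherence. The module and function names (ToyGenerator, FrameDiscriminator, ClipDiscriminator, spatio_temporal_gan_losses) are hypothetical and heavily simplified; the released vid2vid models are considerably more elaborate.

import torch
import torch.nn as nn

class ToyGenerator(nn.Module):
    """Maps a source frame (e.g., a segmentation map) plus the previously
    generated frame to the next output frame."""
    def __init__(self, src_ch=3, img_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(src_ch + img_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, img_ch, 3, padding=1), nn.Tanh())
    def forward(self, src_t, prev_out):
        return self.net(torch.cat([src_t, prev_out], dim=1))

class FrameDiscriminator(nn.Module):
    """Conditional image discriminator: judges one (source, frame) pair."""
    def __init__(self, src_ch=3, img_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(src_ch + img_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1))
    def forward(self, src_t, frame_t):
        return self.net(torch.cat([src_t, frame_t], dim=1))

class ClipDiscriminator(nn.Module):
    """Temporal discriminator: judges a stack of K consecutive frames."""
    def __init__(self, img_ch=3, k=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(img_ch * k, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1))
    def forward(self, clip):  # clip: (B, K, C, H, W)
        b, k, c, h, w = clip.shape
        return self.net(clip.reshape(b, k * c, h, w))

def spatio_temporal_gan_losses(G, D_img, D_vid, src, real, k=3):
    """Simplified generator/discriminator losses for one training step.
    src, real: (B, T, C, H, W) source and ground-truth videos, with T >= k."""
    bce = nn.BCEWithLogitsLoss()
    b, t, c, h, w = real.shape

    # Sequential, frame-by-frame synthesis conditioned on the previous output.
    prev = torch.zeros_like(real[:, 0])
    fakes = []
    for i in range(t):
        prev = G(src[:, i], prev)
        fakes.append(prev)
    fake = torch.stack(fakes, dim=1)

    # Image-level (spatial) adversarial terms on one sampled frame.
    j = torch.randint(t, (1,)).item()
    d_real_img = D_img(src[:, j], real[:, j])
    d_fake_img = D_img(src[:, j], fake[:, j].detach())
    # Clip-level (temporal) adversarial terms on one sampled K-frame window.
    s = torch.randint(t - k + 1, (1,)).item()
    d_real_vid = D_vid(real[:, s:s + k])
    d_fake_vid = D_vid(fake[:, s:s + k].detach())

    loss_D = (bce(d_real_img, torch.ones_like(d_real_img)) +
              bce(d_fake_img, torch.zeros_like(d_fake_img)) +
              bce(d_real_vid, torch.ones_like(d_real_vid)) +
              bce(d_fake_vid, torch.zeros_like(d_fake_vid)))

    g_img = D_img(src[:, j], fake[:, j])
    g_vid = D_vid(fake[:, s:s + k])
    loss_G = (bce(g_img, torch.ones_like(g_img)) +
              bce(g_vid, torch.ones_like(g_vid)))
    return loss_G, loss_D

A training loop would alternate between updating the two discriminators on loss_D and updating the generator on loss_G, typically alongside additional reconstruction or perceptual terms; none of the specifics above should be read as the exact objective used in the paper.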
Paper
arXiv, 2018.
Citation
Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Guilin Liu, Andrew Tao, Jan Kautz, and Bryan Catanzaro. "Video-to-Video Synthesis", in NeurIPS, 2018.
Code: PyTorch
Our Example Results
Semantic Labels → Cityscapes Street Views
Face → Edge → Face
Body → Pose → Body
Frame Prediction
Acknowledgement
We thank Karan Sapra, Fitsum Reda, and Matthieu Le for generating the segmentation maps for us. We also thank Lisa Rhee and Miss Ketsuki for allowing us to use their dance videos for training. We thank William S. Peebles for proofreading the paper.
Citation
If you find this useful for your research, please use the following.
@inproceedings{wang2018vid2vid,
  title     = {Video-to-Video Synthesis},
  author    = {Ting-Chun Wang and Ming-Yu Liu and Jun-Yan Zhu and Guilin Liu and Andrew Tao and Jan Kautz and Bryan Catanzaro},
  booktitle = {Conference on Neural Information Processing Systems (NeurIPS)},
  year      = {2018}
}