Flexible Prior Distributions for Deep Generative Models

Y. Kilcher, A. Lucchi, T. Hofmann. arXiv preprint arXiv:1710.11383, 2017. arxiv.org
We consider the problem of training generative models with deep neural networks as generators, i.e., mapping latent codes to data points. Whereas the dominant paradigm combines simple priors over codes with complex deterministic models, we argue that it might be advantageous to use more flexible code distributions. We demonstrate how these distributions can be induced directly from the data. The benefits include: more powerful generative models, better modeling of latent structure, and explicit control of the degree of generalization.
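To make the contrast in the abstract concrete, here is a minimal sketch (not the paper's construction) of the two sampling regimes it describes: a generator network mapping latent codes to data points, first fed by a simple fixed prior (a standard Gaussian) and then by a more flexible prior induced from data. The flexible prior here is illustrated by a Gaussian mixture fit to codes obtained from training data; the tensor `codes_from_data` stands in for whatever code-inference mechanism the paper actually uses and is purely hypothetical.

```python
import torch
import torch.nn as nn
from sklearn.mixture import GaussianMixture

# Generator: a deterministic network mapping latent codes z to data points x
# (here, flattened 28x28 images as a placeholder data space).
class Generator(nn.Module):
    def __init__(self, latent_dim=64, data_dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256),
            nn.ReLU(),
            nn.Linear(256, data_dim),
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

latent_dim = 64
G = Generator(latent_dim)

# Dominant paradigm: sample codes from a simple fixed prior, e.g. N(0, I).
z_simple = torch.randn(16, latent_dim)
x_simple = G(z_simple)

# Illustrative alternative (an assumption, not the paper's method): a flexible
# prior induced from data, here a Gaussian mixture fit to codes produced by
# some encoder applied to training examples. `codes_from_data` is a stand-in
# for those encoder outputs.
codes_from_data = torch.randn(1000, latent_dim)  # placeholder for real codes
gmm = GaussianMixture(n_components=10).fit(codes_from_data.numpy())
z_flex, _ = gmm.sample(16)
x_flex = G(torch.as_tensor(z_flex, dtype=torch.float32))
```

The sketch only illustrates the architectural distinction the abstract draws: the generator is unchanged in both cases, and only the distribution over its input codes differs.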