2. Mutual Information Papers (covered two reading-group sessions ago)
• “Learning deep representations by mutual information estimation and maximization” (ICLR2019)
• “Mutual Information Neural Estimation” (ICML2018)
• “Representation Learning with Contrastive Predictive Coding” (NIPS2018)
• “On variational lower bounds of mutual information” (NIPS2018, workshop)
• “Emergence of Invariance and Disentanglement in Deep Representations” (JMLR)
• “Deep Variational Information Bottleneck” (ICLR2017)
• “Variational Discriminator Bottleneck: Improving Imitation Learning, Inverse RL, and GANs by Constraining Information Flow” (ICLR2019, poster)
• “Fixing a Broken ELBO” (ICML2018)
• “MAE: Mutual Posterior-Divergence Regularization for Variational AutoEncoders” (ICLR2019, poster)
• “EnGAN: Latent Space MCMC and Maximum Entropy Generators for Energy-based Models” (ICLR2019, reject)
• “Deep Graph Infomax” (ICLR2019, poster)
• “Formal Limitations on the Measurement of Mutual Information” (ICLR2019, reject)
Main focus of this talk
Touched on only briefly
3. Recent Papers on Representation Learning via Mutual Information Maximization
• “Learning Representations by Maximizing Mutual Information”, NIPS2019
• “On Variational Bounds of Mutual Information”, ICML2019
• “Greedy InfoMax for Biologically Plausible Self-Supervised Representation Learning”, NIPS2019
• “On Mutual Information Maximization for Representation Learning”
• “Region Mutual Information Loss for Semantic Segmentation”, NIPS2019
• (to be added later)
4. Outline
• Background: representation learning, mutual information, contrastive estimation (a sketch of the InfoNCE bound follows this outline)
• Paper 1: “Learning Representations by Maximizing Mutual Information”, NIPS2019
• Paper 2: “Greedy InfoMax for Biologically Plausible Self-Supervised Representation Learning” (NIPS2019)
• Paper 3: “On Mutual Information Maximization for Representation Learning”
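For reference, one common way to write the contrastive (InfoNCE) lower bound that most of the papers above build on; the critic $f(x,y)$ and batch size $K$ are generic placeholders here, not notation from any particular paper:

\[
I(X;Y) \;\ge\; \mathbb{E}\left[\frac{1}{K}\sum_{i=1}^{K}\log\frac{e^{f(x_i,y_i)}}{\frac{1}{K}\sum_{j=1}^{K}e^{f(x_i,y_j)}}\right] \;=\; \log K - \mathcal{L}_{\mathrm{NCE}}
\]

Since the right-hand side can never exceed $\log K$, estimating a large mutual information requires very large batches; this is the limitation raised in “Formal Limitations on the Measurement of Mutual Information” and revisited in “On Variational Bounds of Mutual Information”.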
23. Aside: 8× Tesla V100 GPUs are a basic human right
“We train our models using 4-8 standard Tesla V100 GPUs per model. Other recent, strong self-supervised models are nonreproducible on standard hardware.”