Mutual Information Neural Estimation
We argue that the estimation of mutual information between high-dimensional continuous
random variables can be achieved by gradient descent over neural networks. We present a
Mutual Information Neural Estimator (MINE) that is linearly scalable in dimensionality as well
as in sample size, trainable through backpropagation, and strongly consistent. We present a
handful of applications in which MINE can be used to minimize or maximize mutual
information. We apply MINE to improve adversarially trained generative models. We also …
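The core idea — estimating a lower bound on mutual information by gradient ascent over a parametric "statistics network" — can be illustrated with a deliberately minimal sketch. The example below is an assumption-laden toy, not the paper's method: the statistics network is reduced to a single-parameter critic T(x, y) = a·x·y, the data are correlated 1-D Gaussians (whose true MI is known in closed form), and the Donsker-Varadhan lower bound is maximized over the scalar a by plain gradient ascent.

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho = 100_000, 0.8

# Joint samples (x, y) of correlated standard Gaussians.
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)
# Shuffling y breaks the pairing, approximating the product of marginals.
y_sh = rng.permutation(y)

def dv_bound(a):
    # Donsker-Varadhan bound: E_joint[T] - log E_marginals[exp(T)],
    # with the toy critic T(x, y) = a * x * y.
    return np.mean(a * x * y) - np.log(np.mean(np.exp(a * x * y_sh)))

# Gradient ascent on the single critic parameter a.
a, lr = 0.0, 0.1
for _ in range(300):
    w = np.exp(a * x * y_sh)
    grad = np.mean(x * y) - np.mean(x * y_sh * w) / np.mean(w)
    a += lr * grad

est = dv_bound(a)
true_mi = -0.5 * np.log(1 - rho**2)  # closed-form MI for bivariate Gaussians
```

Because the critic family here is far too small to represent the optimal log-density ratio, `est` sits strictly below `true_mi`; in MINE the critic is a neural network, which tightens the bound as capacity grows.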