Regularity Normalization: Constraining Implicit Space with Minimum
Description Length
by
Baihan Lin
2019
Abstract
Inspired by the adaptation phenomenon of biological neuronal firing, we
propose regularity normalization: a reparameterization of the activations in a
neural network that takes into account the statistical regularity of the
implicit space. By treating the neural network optimization process as a
model selection problem, the implicit space is constrained by the normalizing
factor, the minimum description length of the optimal universal code. We
introduce an incremental version of computing this universal code as the
normalized maximum likelihood, and demonstrate its flexibility to include data
priors such as top-down attention and other oracle information, as well as its
compatibility with batch normalization and layer normalization. Preliminary
results show that the proposed method outperforms existing normalization
methods on limited and imbalanced data from a non-stationary distribution,
benchmarked on computer vision tasks. As an unsupervised attention mechanism
over the input data, this biologically plausible normalization has the
potential to handle other complicated real-world scenarios as well as
reinforcement learning settings where rewards are sparse and non-uniform.
Further research is proposed to explore these scenarios and the behaviors of
the different variants.
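The abstract describes scaling activations by an incrementally computed normalized maximum likelihood (NML) factor. The following is a minimal sketch of that idea, not the paper's implementation: it assumes a per-unit Gaussian model whose maximum likelihood estimates are updated online (via Welford's algorithm), accumulates the normalizing factor from each sample's likelihood under its own MLE, and scales each activation by its resulting code length (its "regularity"). All names and the Gaussian choice are illustrative assumptions.

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    """Elementwise Gaussian density; sigma is floored to avoid division by zero."""
    sigma = np.maximum(sigma, 1e-6)
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

class RegularityNormalization:
    """Hypothetical sketch of incremental NML-based normalization.

    Each unit's activation stream is modeled as Gaussian; `comp` accumulates
    the maximum likelihood of every sample seen so far under its own
    incrementally updated MLE, approximating the NML denominator.
    """
    def __init__(self, n_units):
        self.count = 0
        self.mean = np.zeros(n_units)
        self.m2 = np.zeros(n_units)      # running sum of squared deviations
        self.comp = np.zeros(n_units)    # incremental normalizing factor

    def __call__(self, x):
        # Update running MLE estimates with Welford's online algorithm.
        self.count += 1
        delta = x - self.mean
        self.mean += delta / self.count
        self.m2 += delta * (x - self.mean)
        sigma = np.sqrt(self.m2 / self.count) + 1e-6
        # Likelihood of the current sample under its own MLE parameters.
        p_hat = gaussian_pdf(x, self.mean, sigma)
        self.comp += p_hat
        # Regularity = NML code length; use it to rescale the activation.
        p_nml = p_hat / self.comp
        return -np.log(p_nml + 1e-12) * x
```

In use, the layer would be called once per batch element or time step, so that rare (high code length) activations are amplified relative to frequently seen ones.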
Archived Files and Locations
application/pdf, 288.7 kB
arxiv.org (repository); web.archive.org (webarchive)
arXiv: 1902.10658v5