Regularity Normalization: Neuroscience-Inspired Unsupervised Attention across Neural Network Layers
by Baihan Lin
Abstract
Inspired by the adaptation phenomenon of neuronal firing, we propose regularity normalization (RN) as an unsupervised attention mechanism (UAM) that computes the statistical regularity in the implicit space of neural networks under the Minimum Description Length (MDL) principle. Treating the neural network optimization process as a partially observable model selection problem, regularity normalization constrains the implicit space by a normalization factor, the universal code length. We compute this universal code incrementally across neural network layers and demonstrate the flexibility to include data priors such as top-down attention and other oracle information. Empirically, our approach outperforms existing normalization methods in tackling limited, imbalanced, and non-stationary input distributions in image classification, classic control, procedurally generated reinforcement learning, generative modeling, handwriting generation, and question answering tasks with various neural network architectures. Lastly, the unsupervised attention mechanism is a useful probing tool for neural networks, tracking the dependency and critical learning stages across layers and recurrent time steps of deep networks.
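The abstract describes RN as scaling a layer's activations by an incrementally estimated universal (normalized maximum likelihood) code length. As a rough illustration of that idea, below is a minimal NumPy sketch under a strong simplifying assumption: a single Gaussian model of the layer's activation history with running mean/variance estimates. The class name RegularityNorm and every implementation detail here are hypothetical stand-ins, not the authors' reference implementation.

```python
# Minimal sketch of a regularity-normalization-style layer (hypothetical,
# not the paper's reference code). Assumes layer activations are modeled
# by a single Gaussian whose parameters are updated incrementally,
# mirroring the incremental universal-code estimate in the abstract.
import numpy as np

class RegularityNorm:
    """Scales activations by an incremental NML-style code length."""

    def __init__(self, eps: float = 1e-8):
        self.n = 0        # number of batches seen so far
        self.mean = 0.0   # running mean of activation elements
        self.var = 0.0    # running variance of activation elements
        self.comp = eps   # running sum of likelihoods (the NML normalizer)
        self.eps = eps

    def __call__(self, x: np.ndarray) -> np.ndarray:
        # Incrementally update the Gaussian model of the activation history.
        self.n += 1
        self.mean += (float(x.mean()) - self.mean) / self.n
        self.var += (float(x.var()) - self.var) / self.n
        var = max(self.var, self.eps)

        # Average log-likelihood of the current activations under the model,
        # standing in for the maximum likelihood P(x | theta_hat).
        log_p = float(np.mean(-0.5 * np.log(2 * np.pi * var)
                              - (x - self.mean) ** 2 / (2 * var)))
        p = np.exp(log_p)

        # Accumulate the normalizer and convert to a universal code length
        # L = -log(p / COMP); the very first batch gets length ~0, since the
        # normalizer then consists of that batch's likelihood alone.
        self.comp += p
        code_len = -np.log(p / self.comp + self.eps)

        # Scale activations by their regularity (code length): statistically
        # surprising inputs are up-weighted, giving the attention-like effect
        # the abstract describes.
        return code_len * x
```

In use, such a layer would simply wrap each activation tensor, e.g. `layer = RegularityNorm(); y = layer(activations)`, with one instance per network layer so that each maintains its own incremental model.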
Archived Files and Locations
application/pdf, 12.6 MB
mdpi-res.com (publisher) | web.archive.org (webarchive)
Web Captures
https://www.mdpi.com/1099-4300/24/1/59/htm
2022-08-07 21:23:02 (68 resources)
web.archive.org (webarchive)
Open Access Publication
In DOAJ
In ISSN ROAD
In Keepers Registry
ISSN-L: 1099-4300
Access all versions, variants, and formats of this work (e.g., pre-prints):
Crossref Metadata (via API)
Worldcat
SHERPA/RoMEO (journal policies)
wikidata.org
CORE.ac.uk
Semantic Scholar
Google Scholar