
Constraining Implicit Space with Minimum Description Length: An Unsupervised Attention Mechanism across Neural Network Layers
release_lyi32hgo5vcithbsaokdnsp7hq

by Baihan Lin

Released as an article.

2020  

Abstract

Inspired by the adaptation phenomenon of neuronal firing, we propose regularity normalization (RN) as an unsupervised attention mechanism (UAM) which computes the statistical regularity in the implicit space of neural networks under the Minimum Description Length (MDL) principle. Treating the neural network optimization process as a partially observable model selection problem, the UAM constrains the implicit space by a normalization factor, the universal code length. We compute this universal code incrementally across neural network layers and demonstrate the flexibility to include data priors such as top-down attention and other oracle information. Empirically, our approach outperforms existing normalization methods in tackling limited, imbalanced, and non-stationary input distributions in image classification, classic control, procedurally generated reinforcement learning, generative modeling, handwriting generation, and question answering tasks with various neural network architectures. Lastly, the UAM tracks dependency and critical learning stages across layers and recurrent time steps of deep networks.
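The abstract describes the mechanism only at a high level. As a rough illustration of the incremental scheme it mentions, the following is a minimal NumPy sketch, assuming a Gaussian model of layer activations fit by maximum likelihood and an additively accumulated normalizer COMP; the class name, the Gaussian choice, and the exact scaling rule are illustrative assumptions, not the paper's verified algorithm.

```python
import numpy as np

class RegularityNormalization:
    """Sketch of a regularity-normalization-style layer (illustrative only).

    Assumptions not stated in the abstract: activations are modeled with a
    Gaussian fit by maximum likelihood, and the normalizer COMP accumulates
    the maximized likelihood additively so that the universal (NML) code
    length can be computed incrementally, one batch at a time.
    """

    def __init__(self):
        self.comp = 1e-8  # running normalizer; small epsilon avoids log(0)

    def __call__(self, x):
        # MLE of a Gaussian fit to the current activations.
        mu, sigma = x.mean(), x.std() + 1e-8
        # Per-element log-likelihood under that fit.
        log_p = -0.5 * np.log(2.0 * np.pi * sigma ** 2) \
                - (x - mu) ** 2 / (2.0 * sigma ** 2)
        p_hat = float(np.exp(log_p.mean()))
        # Incremental update: COMP_t = COMP_{t-1} + P(x_t | theta_hat_t).
        self.comp += p_hat
        # Universal code length, used here as the normalization factor.
        code_len = -np.log(p_hat) + np.log(self.comp)
        return x * code_len

rn = RegularityNormalization()
h = rn(np.random.randn(32, 128))  # scaled activations, same shape as input
```

Note that on the first call the code length is near zero, since COMP is dominated by the current likelihood; the factor becomes informative only as the running normalizer accumulates over steps.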

Archived Files and Locations

application/pdf  7.0 MB
file_uzhycufczbbpbi7rdpzqcobkm4
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2020-09-10
Version   v12
Language   en
arXiv  1902.10658v12
Work Entity
Access all versions, variants, and formats of this work (e.g., pre-prints)
Catalog Record
Revision: 0b8a4ed4-aec1-4aea-9761-473a2d6a6f58
API URL: JSON