
Unsupervised Attention Mechanism across Neural Network Layers

by Baihan Lin

Released as an article.

2019  

Abstract

Inspired by the adaptation phenomenon of neuronal firing, we propose an unsupervised attention mechanism (UAM) which computes the statistical regularity in the implicit space of neural networks under the Minimum Description Length (MDL) principle. Treating the neural network optimization process as a partially observable model selection problem, UAM constrains the implicit space by a normalization factor, the universal code length. We compute this universal code incrementally across neural network layers and demonstrate the flexibility to include data priors such as top-down attention and other oracle information. Empirically, our approach outperforms existing normalization methods in tackling limited, imbalanced, and nonstationary input distributions in computer vision and reinforcement learning tasks. Lastly, UAM tracks dependency and critical learning stages across layers and recurrent time steps of deep networks.
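
As a rough illustration of the idea summarized above, the sketch below tracks an incremental MDL code length for a layer's activations and uses it as a normalization factor. This is a minimal sketch under our own assumptions, not the paper's implementation: the Gaussian activation model, the two-part code standing in for the paper's universal code, and the name UniversalCodeNormalizer are all illustrative.

import numpy as np

class UniversalCodeNormalizer:
    # Hypothetical sketch (not the paper's code): maintain running Gaussian
    # statistics of a layer's activations with Welford's algorithm, compute
    # a two-part MDL code length incrementally, and normalize by it.

    def __init__(self):
        self.count = 0    # samples seen so far
        self.mean = 0.0   # running mean of activations
        self.m2 = 0.0     # running sum of squared deviations

    def update(self, x):
        """Fold in a 1-D batch of activations, return the normalized batch."""
        for v in x:
            self.count += 1
            delta = v - self.mean
            self.mean += delta / self.count
            self.m2 += delta * (v - self.mean)
        var = self.m2 / max(self.count - 1, 1) + 1e-8
        # Two-part code: data cost (negative Gaussian log-likelihood of the
        # batch) plus parameter cost (k/2) * log n with k = 2 (mean, var).
        nll = 0.5 * np.sum((x - self.mean) ** 2 / var + np.log(2 * np.pi * var))
        code_length = nll + 0.5 * 2 * np.log(self.count)
        # A shorter code signals a more "regular" input; scale accordingly.
        return x / (1.0 + code_length / self.count)

# Usage: normalize a layer's activations as batches stream in.
norm = UniversalCodeNormalizer()
for batch in (np.random.randn(32), np.random.randn(32) + 3.0):
    out = norm.update(batch)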

Type  article
Stage   submitted
Date   2019-07-25
Version   v6
Language   en
arXiv  1902.10658v6
Work Entity
Access all versions, variants, and formats of this work (e.g., pre-prints).
Revision

This is a specific, static metadata record, not necessarily linked to any current entity in the catalog.

Catalog Record
Revision: 03426eaa-f69d-4cb7-b136-498f4ac9d6c4
API URL: JSON