Abstract
This paper presents a new framework that extends previous work on multiple cause mixture models. We search for an optimal neural-network encoding of a given set of input patterns; this entails extracting hidden causes and eliminating redundancy, leading to a factorial code. We propose a new entropy measure whose maximization yields both maximum information transmission and independence of the internal representations for factorial input spaces in the absence of noise. No extra assumptions are needed, in contrast with previous models in which some information about the input space, such as the number of generators, must be known a priori.
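The abstract's notion of a factorial code can be made concrete: a code is factorial when the joint entropy of the code units equals the sum of their individual entropies, i.e. the units are statistically independent. The sketch below (an illustration of this general criterion, not the paper's specific entropy measure, which is not reproduced here) computes that gap for empirical binary codes:

```python
from collections import Counter
from math import log2

def entropy(symbols):
    """Shannon entropy (in bits) of the empirical distribution of `symbols`."""
    n = len(symbols)
    return -sum((c / n) * log2(c / n) for c in Counter(symbols).values())

def factorial_gap(codes):
    """Sum of per-unit entropies minus the joint entropy of the code.

    The gap is zero exactly when the code units are independent
    (a factorial code) and positive when they carry redundant
    information about each other.
    """
    joint = entropy([tuple(c) for c in codes])
    marginals = sum(entropy([c[i] for c in codes])
                    for i in range(len(codes[0])))
    return marginals - joint

# Independent units -> factorial code: gap is 0
independent = [(a, b) for a in (0, 1) for b in (0, 1)]
# Redundant units (second unit copies the first): gap is positive
redundant = [(a, a) for a in (0, 1)] * 2
```

Minimizing such a gap while preserving the information the code carries about the input is the intuition behind minimum-entropy (factorial) coding.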
© 2002 Springer-Verlag Berlin Heidelberg
Lago-Fernández, L.F., Corbacho, F. (2002). Optimal Extraction of Hidden Causes. In: Dorronsoro, J.R. (eds) Artificial Neural Networks — ICANN 2002. ICANN 2002. Lecture Notes in Computer Science, vol 2415. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-46084-5_103
Print ISBN: 978-3-540-44074-1
Online ISBN: 978-3-540-46084-8