
Part of the book series: Lecture Notes in Computer Science (LNISA, volume 4224)

Abstract

When applying independent component analysis (ICA), we sometimes want the connections between the observed mixtures and the recovered independent components (or the original sources) to be sparse, to make the interpretation easier or to reduce model complexity. In this paper we propose natural gradient algorithms for ICA with a sparse separation matrix, as well as for ICA with a sparse mixing matrix. The sparsity of the matrix is achieved by applying certain penalty functions to its entries. The properties of these penalty functions are investigated. Experimental results on both artificial data and causality discovery in financial stocks show the usefulness of the proposed methods.
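The approach in the abstract can be sketched as a natural gradient ICA update with a sparsity penalty added to the separation matrix. The paper leaves the penalty family open ("certain penalty functions"); the tanh score function, the L1 (lasso-type) penalty, and the step sizes below are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

# Minimal sketch: natural-gradient ICA with an L1 sparsity penalty on the
# separation matrix W. Assumed choices: tanh score, L1 penalty, batch updates.
rng = np.random.default_rng(0)
n, N = 2, 2000
S = rng.laplace(size=(n, N))             # super-Gaussian sources
A = np.array([[1.0, 0.5],
              [0.0, 1.0]])               # mixing matrix; its inverse has a zero entry
X = A @ S                                # observed mixtures

W = np.eye(n)                            # separation matrix estimate
eta, lam = 0.05, 0.01                    # learning rate, penalty weight
for _ in range(1000):
    Y = W @ X                            # current source estimates
    # Natural gradient of the log-likelihood (Amari's rule, batch form)
    nat_grad = (np.eye(n) - np.tanh(Y) @ Y.T / N) @ W
    # Subgradient of the L1 penalty pushes small entries of W toward zero
    W += eta * (nat_grad - lam * np.sign(W))

P = np.abs(W @ A)                        # near a scaled permutation matrix if separated
```

With the L1 term, entries of W whose true value is zero (here, the zero entry of A⁻¹) are shrunk toward exactly zero, which is what makes the recovered connection structure easier to interpret.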




Copyright information

© 2006 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Zhang, K., Chan, LW. (2006). ICA with Sparse Connections. In: Corchado, E., Yin, H., Botti, V., Fyfe, C. (eds) Intelligent Data Engineering and Automated Learning – IDEAL 2006. IDEAL 2006. Lecture Notes in Computer Science, vol 4224. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11875581_64


  • DOI: https://doi.org/10.1007/11875581_64

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-45485-4

  • Online ISBN: 978-3-540-45487-8

  • eBook Packages: Computer Science, Computer Science (R0)
