Abstract
We present two new methods that extend the traditional sparse coding approach with supervised components. The goal of these extensions is to increase the suitability of the learned features for classification tasks while retaining most of their general representation performance. A special visualization is introduced that allows us to show the principal effect of the new methods. Furthermore, first experimental results are obtained on the COIL-100 database.
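The sparse coding framework the abstract refers to seeks a code s that reconstructs an input x from a basis W while keeping most coefficients at zero, typically by minimizing ||x - Ws||^2 + lambda*||s||_1. The following is a minimal generic sketch of this inference step using ISTA (proximal gradient descent); it illustrates plain sparse coding only, not the paper's class-specific extensions, and all names and parameter values are illustrative assumptions.

```python
import numpy as np

def sparse_code(x, W, lam=0.1, lr=0.1, steps=200):
    """Infer a sparse coefficient vector s minimizing
    ||x - W s||^2 + lam * ||s||_1 via ISTA (proximal gradient).
    Generic sparse-coding sketch, not the paper's exact method."""
    s = np.zeros(W.shape[1])
    for _ in range(steps):
        grad = W.T @ (W @ s - x)                 # gradient of reconstruction term
        s = s - lr * grad                        # gradient step
        s = np.sign(s) * np.maximum(np.abs(s) - lr * lam, 0.0)  # soft-threshold (L1 prox)
    return s

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 32))
W /= np.linalg.norm(W, axis=0)                   # unit-norm basis vectors
x = 2.0 * W[:, 3] - 1.5 * W[:, 7]                # signal built from two basis vectors
s = sparse_code(x, W)                            # s is sparse and reconstructs x well
```

The L1 penalty drives most coefficients to exactly zero via the soft-threshold step, which is what makes the learned representation "sparse"; the paper's contribution is adding supervised, class-specific terms on top of such an objective.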
Copyright information
© 2005 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Hasler, S., Wersing, H., Körner, E. (2005). Class-Specific Sparse Coding for Learning of Object Representations. In: Duch, W., Kacprzyk, J., Oja, E., Zadrożny, S. (eds) Artificial Neural Networks: Biological Inspirations – ICANN 2005. ICANN 2005. Lecture Notes in Computer Science, vol 3696. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11550822_74
DOI: https://doi.org/10.1007/11550822_74
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-28752-0
Online ISBN: 978-3-540-28754-4
eBook Packages: Computer Science (R0)