DOI: 10.1145/1390156.1390294

Extracting and composing robust features with denoising autoencoders

Published: 05 July 2008

Abstract

Previous work has shown that the difficulties in learning deep generative or discriminative models can be overcome by an initial unsupervised learning step that maps inputs to useful intermediate representations. We introduce and motivate a new training principle for unsupervised learning of a representation based on the idea of making the learned representations robust to partial corruption of the input pattern. This approach can be used to train autoencoders, and these denoising autoencoders can be stacked to initialize deep architectures. The algorithm can be motivated from a manifold learning and information theoretic perspective or from a generative model perspective. Comparative experiments clearly show the surprising advantage of corrupting the input of autoencoders on a pattern classification benchmark suite.
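
To make the training principle concrete, below is a minimal NumPy sketch of a single denoising autoencoder layer in the spirit of the abstract: the input is corrupted by randomly masking components to zero, a sigmoid encoder/decoder pair reconstructs the clean input, and a cross-entropy reconstruction loss is minimized by stochastic gradient descent. The tied weights, layer sizes, learning rate, corruption level, and synthetic data are illustrative assumptions, not details taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Toy data with values in [0, 1]; in a real setting these would be, e.g., image pixels.
X = rng.random((500, 64))

n_visible, n_hidden = 64, 32
W = rng.normal(0.0, 0.01, size=(n_visible, n_hidden))  # tied weights: decoder uses W.T (an assumption)
b_h = np.zeros(n_hidden)    # encoder (hidden) bias
b_v = np.zeros(n_visible)   # decoder (reconstruction) bias

corruption_level = 0.25     # fraction of input components forced to 0 (masking noise)
lr = 0.1                    # learning rate

for epoch in range(20):
    for x in X:
        # 1. Corrupt the input by zeroing a random subset of its components.
        keep = rng.random(n_visible) >= corruption_level
        x_tilde = x * keep

        # 2. Encode the corrupted input, then decode to a reconstruction z.
        y = sigmoid(x_tilde @ W + b_h)   # learned representation
        z = sigmoid(y @ W.T + b_v)       # reconstruction, compared against the CLEAN x

        # 3. Gradients of the cross-entropy loss -sum(x*log z + (1-x)*log(1-z))
        #    with respect to the pre-activations, back-propagated through the
        #    tied-weight encoder/decoder, followed by an SGD update.
        dz = z - x
        dy = (dz @ W) * y * (1.0 - y)
        W -= lr * (np.outer(x_tilde, dy) + np.outer(dz, y))
        b_h -= lr * dy
        b_v -= lr * dz

After training, the hidden representation y (computed from the uncorrupted input) would serve as the input to the next layer when stacking such autoencoders to initialize a deep architecture, as the abstract describes.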




Published In

ICML '08: Proceedings of the 25th international conference on Machine learning
July 2008
1310 pages
ISBN:9781605582054
DOI:10.1145/1390156
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Sponsors

  • Pascal
  • University of Helsinki
  • Xerox
  • Federation of Finnish Learned Societies
  • Google Inc.
  • NSF
  • Machine Learning Journal/Springer
  • Microsoft Research
  • Intel
  • Yahoo!
  • Helsinki Institute for Information Technology
  • IBM

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 05 July 2008

Permissions

Request permissions for this article.


Qualifiers

  • Research-article

Conference

ICML '08
Sponsor:
  • Microsoft Research
  • Intel
  • IBM

Acceptance Rates

Overall Acceptance Rate: 140 of 548 submissions (26%)

Contributors

Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol (Université de Montréal)
Bibliometrics & Citations

Article Metrics

  • Downloads (Last 12 months): 1,580
  • Downloads (Last 6 weeks): 125
Reflects downloads up to 03 Feb 2025


Cited By

  • (2025) Detection of Harmful Objects Using Deep Learning Models. SCT Proceedings in Interdisciplinary Insights and Innovations, 3(523). DOI: 10.56294/piii2025523. Online publication date: 6-Jan-2025
  • (2025) Enhancing Plant Leaf Classification with Deep Learning: Automating Feature Extraction for Accurate Species Identification. SCT Proceedings in Interdisciplinary Insights and Innovations, 3(513). DOI: 10.56294/piii2025513. Online publication date: 6-Jan-2025
  • (2025) Time-to-Fault Prediction Framework for Automated Manufacturing in Humanoid Robotics Using Deep Learning. Technologies, 13(2), 42. DOI: 10.3390/technologies13020042. Online publication date: 21-Jan-2025
  • (2025) Image Segmentation Framework for Detecting Adversarial Attacks for Autonomous Driving Cars. Applied Sciences, 15(3), 1328. DOI: 10.3390/app15031328. Online publication date: 27-Jan-2025
  • (2025) Low-Light Image Enhancement Based on Depthwise Separable Convolution. Ukrainian Journal of Physical Optics, 26(1), 01040-01063. DOI: 10.3116/16091833/Ukr.J.Phys.Opt.2025.01040. Online publication date: 2025
  • (2025) Analog circuit fault diagnosis model based on WOA and improved SDAE. IEICE Electronics Express, 22(1), 20240633. DOI: 10.1587/elex.21.20240633. Online publication date: 10-Jan-2025
  • (2025) Open challenges and opportunities in federated foundation models towards biomedical healthcare. BioData Mining, 18(1). DOI: 10.1186/s13040-024-00414-9. Online publication date: 4-Jan-2025
  • (2025) Robust and Ubiquitous Mobility Mode Estimation Using Limited Cellular Information. IEEE Transactions on Vehicular Technology, 74(1), 1310-1321. DOI: 10.1109/TVT.2024.3454208. Online publication date: Jan-2025
  • (2025) Evolved Hierarchical Masking for Self-Supervised Learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 47(2), 1013-1027. DOI: 10.1109/TPAMI.2024.3490776. Online publication date: Feb-2025
  • (2025) UniTE: A Survey and Unified Pipeline for Pre-Training Spatiotemporal Trajectory Embeddings. IEEE Transactions on Knowledge and Data Engineering, 37(3), 1475-1494. DOI: 10.1109/TKDE.2024.3523996. Online publication date: Mar-2025
