
Improved Classification Based on Deep Belief Networks

  • Conference paper
  • First Online:
Artificial Neural Networks and Machine Learning – ICANN 2020 (ICANN 2020)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 12396)

Included in the conference series: International Conference on Artificial Neural Networks (ICANN)

Abstract

For better classification, generative models are used to initialize the model and extract features before training a classifier. Typically, separate unsupervised and supervised learning problems are solved. Generative restricted Boltzmann machines and deep belief networks are widely used for unsupervised learning. To improve this two-phase strategy, we develop several supervised models based on deep belief networks. Model development applies three techniques: modifying the loss function to account for the expectation with respect to the underlying generative model, introducing weight bounds, and multi-level programming. The proposed models capture both unsupervised and supervised objectives effectively. A computational study verifies that our models perform better than the two-phase training approach. In addition, we conduct an ablation study to examine how different parts of our models and different mixes of training samples affect performance.
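
The baseline the abstract refers to is the standard two-phase strategy: unsupervised, greedy layer-wise pre-training of a deep belief network with restricted Boltzmann machines, followed by a separate supervised classifier trained on the extracted top-layer features. The sketch below illustrates that baseline only; it is not the authors' code, and the layer sizes, learning rates, CD-1 settings, and softmax head are illustrative assumptions. The paper's proposed models (expectation-based loss, weight bounds, multi-level programming) are not reproduced here.

    # Minimal sketch of the two-phase DBN baseline: RBM pre-training (CD-1),
    # then a separate supervised classifier on the extracted features.
    # All hyperparameters are illustrative assumptions, not values from the paper.
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    class RBM:
        def __init__(self, n_visible, n_hidden, lr=0.05):
            self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
            self.b_v = np.zeros(n_visible)   # visible bias
            self.b_h = np.zeros(n_hidden)    # hidden bias
            self.lr = lr

        def hidden_probs(self, v):
            return sigmoid(v @ self.W + self.b_h)

        def cd1_update(self, v0):
            # One contrastive-divergence (CD-1) step on a mini-batch v0.
            h0 = self.hidden_probs(v0)
            h0_sample = (rng.random(h0.shape) < h0).astype(float)
            v1 = sigmoid(h0_sample @ self.W.T + self.b_v)  # reconstruction
            h1 = self.hidden_probs(v1)
            self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / len(v0)
            self.b_v += self.lr * (v0 - v1).mean(axis=0)
            self.b_h += self.lr * (h0 - h1).mean(axis=0)

    def pretrain_dbn(X, layer_sizes, epochs=10, batch=64):
        # Phase 1: greedy layer-wise unsupervised training; each RBM is trained
        # on the hidden activations of the layer below it.
        rbms, data = [], X
        for n_hidden in layer_sizes:
            rbm = RBM(data.shape[1], n_hidden)
            for _ in range(epochs):
                for i in range(0, len(data), batch):
                    rbm.cd1_update(data[i:i + batch])
            rbms.append(rbm)
            data = rbm.hidden_probs(data)  # features fed to the next layer
        return rbms, data                   # data now holds top-layer features

    def train_classifier(features, y, n_classes, epochs=200, lr=0.1):
        # Phase 2: a separate supervised model (softmax regression here)
        # trained on the features extracted by the pre-trained DBN.
        W = np.zeros((features.shape[1], n_classes))
        b = np.zeros(n_classes)
        onehot = np.eye(n_classes)[y]
        for _ in range(epochs):
            logits = features @ W + b
            p = np.exp(logits - logits.max(axis=1, keepdims=True))
            p /= p.sum(axis=1, keepdims=True)
            grad = (p - onehot) / len(y)
            W -= lr * features.T @ grad
            b -= lr * grad.sum(axis=0)
        return W, b

    if __name__ == "__main__":
        # Toy binary data standing in for a real dataset such as those in the Notes.
        X = (rng.random((512, 64)) < 0.3).astype(float)
        y = rng.integers(0, 2, size=512)
        rbms, feats = pretrain_dbn(X, layer_sizes=[128, 64])
        W, b = train_classifier(feats, y, n_classes=2)
        acc = ((feats @ W + b).argmax(axis=1) == y).mean()
        print(f"training accuracy on toy data: {acc:.2f}")

In contrast to this decoupled pipeline, the models proposed in the paper tie the unsupervised and supervised objectives together during training rather than solving them as separate problems.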


Notes

  1. kdd.ics.uci.edu/databases/kddcup99/kddcup99.html.
  2. archive.ics.uci.edu/ml/datasets/ISOLET.
  3. archive.ics.uci.edu/ml/datasets/reuters-21578+text+categorization+collection.


Author information


Corresponding author

Correspondence to Diego Klabjan.



Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Koo, J., Klabjan, D. (2020). Improved Classification Based on Deep Belief Networks. In: Farkaš, I., Masulli, P., Wermter, S. (eds) Artificial Neural Networks and Machine Learning – ICANN 2020. ICANN 2020. Lecture Notes in Computer Science, vol. 12396. Springer, Cham. https://doi.org/10.1007/978-3-030-61609-0_43


  • DOI: https://doi.org/10.1007/978-3-030-61609-0_43

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-61608-3

  • Online ISBN: 978-3-030-61609-0

  • eBook Packages: Computer Science, Computer Science (R0)
