Continuous restricted Boltzmann machines

Published in Wireless Networks.

Abstract

The restricted Boltzmann machine is a generative neural network: it summarizes its input data to build a probabilistic model that can then be used to reconstruct missing data or to classify new data. Unlike discrete Boltzmann machines, which map the data to integers or bitstrings, continuous Boltzmann machines operate directly on floating-point numbers and therefore represent the data with higher fidelity. The primary limitation in applying Boltzmann machines to big-data problems is the efficiency of the training algorithm. This paper describes an efficient deterministic algorithm for training continuous machines.
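To make the ideas in the abstract concrete, the following is a minimal NumPy sketch of a continuous restricted Boltzmann machine whose visible units carry floating-point values in [0, 1], trained with a deterministic (mean-field) variant of one-step contrastive divergence. This is a generic illustration under those assumptions, not the algorithm of this paper; the class and parameter names are illustrative only.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ContinuousRBM:
    """RBM whose visible units hold continuous values in [0, 1]."""

    def __init__(self, n_visible, n_hidden, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b = np.zeros(n_visible)   # visible biases
        self.c = np.zeros(n_hidden)    # hidden biases
        self.lr = lr

    def hidden_mean(self, v):
        # Expected hidden activations given visible values.
        return sigmoid(v @ self.W + self.c)

    def visible_mean(self, h):
        # Expected visible values given hidden activations.
        return sigmoid(h @ self.W.T + self.b)

    def reconstruct(self, v):
        # Up-down pass: usable to fill in or denoise data.
        return self.visible_mean(self.hidden_mean(v))

    def cd1_step(self, v0):
        # Deterministic CD-1: propagate expectations instead of
        # stochastic samples, so each update is reproducible.
        h0 = self.hidden_mean(v0)
        v1 = self.visible_mean(h0)
        h1 = self.hidden_mean(v1)
        n = len(v0)
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / n
        self.b += self.lr * (v0 - v1).mean(axis=0)
        self.c += self.lr * (h0 - h1).mean(axis=0)
        return float(np.mean((v0 - v1) ** 2))  # reconstruction error
```

Repeated calls to `cd1_step` on batches of training vectors lower the reconstruction error; `reconstruct` then serves the uses named in the abstract, such as imputing missing values in a partially observed input.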

Fig. 1



Acknowledgements

The research was supported in part by the National Institutes of Health Grant GM062920.

Author information

Corresponding author

Correspondence to Robert W. Harrison.


About this article


Cite this article

Harrison, R.W. Continuous restricted Boltzmann machines. Wireless Netw 28, 1263–1267 (2022). https://doi.org/10.1007/s11276-018-01903-6
