
Comparing Regularization Techniques Applied to a Perceptron

  • Conference paper
  • First Online:
  • In: Recent Developments in Mathematical, Statistical and Computational Sciences (AMMCS 2019)
  • Part of the book series: Springer Proceedings in Mathematics & Statistics (PROMS, volume 343)


Abstract

Overfitting is a common problem when training neural networks, especially as computers grow more powerful and we gain the capacity to train larger networks with many free parameters. There is therefore a pressing need to develop and compare techniques that reduce overfitting. We explore the impact of different regularization terms, and of their combinations, on the training phase of a single-perceptron neural network.
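
The full text sits behind the access wall, but the idea named in the abstract can be illustrated directly. Below is a minimal sketch, not the authors' code: a single sigmoid perceptron trained by gradient descent on a squared-error loss with L1 and L2 penalty terms added, so the two regularizers can be used alone or in combination. The penalty weights lam1 and lam2, the learning rate, the epoch count, and the toy data are all illustrative assumptions.

```python
import numpy as np

# Sketch only (not the paper's implementation): one perceptron,
# regularized loss  0.5*mean((p - y)^2) + lam1*||w||_1 + lam2*||w||_2^2.
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_perceptron(X, y, lam1=0.01, lam2=0.01, lr=0.1, epochs=500):
    """Gradient descent on regularized MSE; lam1/lam2 toggle L1/L2 terms."""
    n, d = X.shape
    w = rng.normal(scale=0.1, size=d)
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        grad_z = (p - y) * p * (1 - p)        # chain rule through the sigmoid
        grad_w = X.T @ grad_z / n
        grad_b = grad_z.mean()
        # Regularization gradients: subgradient sign(w) for L1, 2w for L2.
        grad_w += lam1 * np.sign(w) + 2 * lam2 * w
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy linearly separable data for a quick sanity check (assumed, not the
# paper's dataset).
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
w, b = train_perceptron(X, y)
acc = ((sigmoid(X @ w + b) > 0.5) == y).mean()
print(f"weights={w}, accuracy={acc:.2f}")
```

Setting lam1=0 recovers pure L2 (weight decay, which shrinks all weights toward zero), setting lam2=0 recovers pure L1 (which drives some weights exactly to zero), and nonzero values of both give an elastic-net-style combination of the kind the abstract alludes to.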



Author information

Corresponding author

Correspondence to Bryson Boreland.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Boreland, B., Kunze, H., Levere, K.M. (2021). Comparing Regularization Techniques Applied to a Perceptron. In: Kilgour, D.M., Kunze, H., Makarov, R., Melnik, R., Wang, X. (eds) Recent Developments in Mathematical, Statistical and Computational Sciences. AMMCS 2019. Springer Proceedings in Mathematics & Statistics, vol 343. Springer, Cham. https://doi.org/10.1007/978-3-030-63591-6_12
