
Hidden layer

From Wikipedia, the free encyclopedia
Example of hidden layers in an MLP.

In artificial neural networks, a hidden layer is a layer of artificial neurons that is neither an input layer nor an output layer. The simplest examples appear in multilayer perceptrons (MLPs), as illustrated in the diagram.[1]

An MLP without any hidden layer is essentially just a linear model: a composition of linear maps is itself a linear map. Hidden layers combined with nonlinear activation functions, however, introduce nonlinearity into the model, allowing it to represent functions that no linear model can.[1]
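The point above can be sketched in a few lines of NumPy. The sizes (3 inputs, 4 hidden units, 2 outputs) and the ReLU activation are illustrative choices, not from the source; dropping the activation collapses the two layers into a single linear map.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 3 inputs, 4 hidden units, 2 outputs.
W1 = rng.normal(size=(3, 4))   # input -> hidden weights
b1 = np.zeros(4)               # hidden biases
W2 = rng.normal(size=(4, 2))   # hidden -> output weights
b2 = np.zeros(2)               # output biases

def relu(z):
    return np.maximum(0.0, z)

def mlp(x):
    h = relu(x @ W1 + b1)      # hidden layer with nonlinear activation
    return h @ W2 + b2         # output layer (linear)

x = rng.normal(size=3)
y = mlp(x)
```

Without `relu`, the model reduces to `x @ (W1 @ W2) + (b1 @ W2 + b2)`, i.e. one linear layer, no matter how many hidden layers are stacked.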

In typical machine learning practice, the weights and biases are first initialized (often randomly), then iteratively updated during training by gradient descent, with the gradients computed via backpropagation.[1]
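As a minimal sketch of this training loop, the NumPy code below fits a one-hidden-layer MLP to the XOR function, which is not linearly separable and therefore needs a hidden layer. The architecture, activations, learning rate, and step count are illustrative assumptions, not from the source; the gradients are derived by hand via the chain rule (backpropagation).

```python
import numpy as np

rng = np.random.default_rng(1)

# XOR is not linearly separable, so at least one hidden layer is required.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

# Initialize weights randomly, biases at zero.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)
lr = 0.5   # learning rate (illustrative choice)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = np.tanh(X @ W1 + b1)        # hidden layer
    p = sigmoid(h @ W2 + b2)        # output layer
    return h, p

def bce(p, y):                      # binary cross-entropy loss
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

loss_init = bce(forward(X)[1], y)

for step in range(5000):
    h, p = forward(X)
    # Backpropagation: chain rule from the loss back to each parameter.
    dz2 = (p - y) / len(X)                  # grad w.r.t. pre-sigmoid output
    dW2 = h.T @ dz2; db2 = dz2.sum(0)
    dz1 = (dz2 @ W2.T) * (1 - h ** 2)       # tanh'(a) = 1 - tanh(a)^2
    dW1 = X.T @ dz1; db1 = dz1.sum(0)
    # Gradient-descent update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

loss_final = bce(forward(X)[1], y)
```

The same pattern (forward pass, loss, backward pass, parameter update) scales to deeper networks; libraries such as PyTorch automate the backward pass.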

References

  1. ^ a b c Zhang, Aston; Lipton, Zachary; Li, Mu; Smola, Alexander J. (2024). "5.1. Multilayer Perceptrons". Dive into Deep Learning. Cambridge: Cambridge University Press. ISBN 978-1-009-38943-3.