Abstract
The answer to the question “Why do neural networks work so well in practice?” rests largely on the fact that neural networks can approximate well a large family of real-life functions of the input variables. The goal of this chapter is to provide mathematical proofs of this behavior for several variants of target functions.
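The approximation behavior described above can be illustrated numerically. The sketch below is not taken from the chapter; it is a minimal, self-contained example (plain NumPy, with an illustrative target function, network width, and learning rate chosen only for the demonstration) that trains a one-hidden-layer sigmoid network by full-batch gradient descent and checks that its mean squared error against the target decreases.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative target function to approximate on [-1, 1].
def f(x):
    return np.sin(np.pi * x)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One-hidden-layer network: yhat = w2 @ sigmoid(w1 @ x + b1) + b2.
n_hidden = 50                                   # width chosen for the example
w1 = rng.normal(size=(n_hidden, 1))
b1 = rng.normal(size=(n_hidden, 1))
w2 = rng.normal(size=(1, n_hidden)) * 0.1
b2 = np.zeros((1, 1))

x = np.linspace(-1.0, 1.0, 200).reshape(1, -1)  # inputs, shape (1, N)
y = f(x)                                        # targets, shape (1, N)
n = x.shape[1]
lr = 0.05                                       # illustrative learning rate

def forward(x):
    h = sigmoid(w1 @ x + b1)
    return w2 @ h + b2, h

initial_mse = np.mean((forward(x)[0] - y) ** 2)

for _ in range(5000):
    yhat, h = forward(x)
    err = yhat - y                              # residual, shape (1, N)
    # Gradients of the mean squared error (constant factor absorbed in lr).
    gw2 = err @ h.T / n
    gb2 = err.mean(axis=1, keepdims=True)
    dh = (w2.T @ err) * h * (1.0 - h)           # backprop through the sigmoid
    gw1 = dh @ x.T / n
    gb1 = dh.mean(axis=1, keepdims=True)
    w2 -= lr * gw2; b2 -= lr * gb2
    w1 -= lr * gw1; b1 -= lr * gb1

final_mse = np.mean((forward(x)[0] - y) ** 2)
print("MSE before/after training:", initial_mse, final_mse)
```

This does not prove universal approximation, of course; the chapter's theorems concern the existence of arbitrarily good approximants, while the sketch only shows a particular network improving its fit under gradient descent.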
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this chapter
Calin, O. (2020). Universal Approximators. In: Deep Learning Architectures. Springer Series in the Data Sciences. Springer, Cham. https://doi.org/10.1007/978-3-030-36721-3_9
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-36720-6
Online ISBN: 978-3-030-36721-3
eBook Packages: Mathematics and Statistics (R0)