Abstract
Backwards calculation of derivatives – sometimes called the reverse mode, the full adjoint method, or backpropagation – has been developed and applied in many fields. This paper reviews several strands of history, advanced capabilities and types of application – particularly those which are crucial to the development of brain-like capabilities in intelligent control and artificial intelligence.
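To make the abstract's central idea concrete, here is a minimal sketch of reverse-mode differentiation (backpropagation) that is not taken from the paper itself: a single forward pass records intermediate values, and a single backward pass applies the chain rule in reverse to obtain all derivatives of the output at once. The function f and all variable names are illustrative assumptions.

```python
import math

def f_and_grad(x, y):
    """Illustrative reverse-mode sketch for f(x, y) = x*y + sin(x)."""
    # Forward pass: evaluate and record intermediates.
    a = x * y           # a = x*y
    b = math.sin(x)     # b = sin(x)
    z = a + b           # output

    # Backward pass: propagate sensitivities from the output back to the inputs.
    dz_dz = 1.0
    dz_da = dz_dz * 1.0                       # z = a + b  =>  dz/da = 1
    dz_db = dz_dz * 1.0                       # z = a + b  =>  dz/db = 1
    dz_dx = dz_da * y + dz_db * math.cos(x)   # a = x*y and b = sin(x) both depend on x
    dz_dy = dz_da * x                         # only a = x*y depends on y

    return z, dz_dx, dz_dy

if __name__ == "__main__":
    z, dx, dy = f_and_grad(1.5, 2.0)
    print(z, dx, dy)  # dx should equal y + cos(x), dy should equal x
```

The point of the reverse sweep is that its cost is roughly that of one extra function evaluation, regardless of how many inputs the function has, which is what makes it attractive for training large neural networks and for adjoint computations in control.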
Copyright information
© 2006 Springer
About this paper
Cite this paper
Werbos, P.J. (2006). Backwards Differentiation in AD and Neural Nets: Past Links and New Opportunities. In: Bücker, M., Corliss, G., Naumann, U., Hovland, P., Norris, B. (eds) Automatic Differentiation: Applications, Theory, and Implementations. Lecture Notes in Computational Science and Engineering, vol 50. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-28438-9_2
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-28403-1
Online ISBN: 978-3-540-28438-3
eBook Packages: Mathematics and Statistics (R0)