
Exactly Predictable Functions for Simple Neural Networks

  • Original Research
  • Published:
SN Computer Science

A Publisher Correction to this article was published on 28 September 2023


Abstract

We examine how accurately simple feedforward neural nets with N inputs and a single output can forecast time series that represent analytical functions. We show that the subspace of those functions whose higher order derivatives can be clustered into a finite number of linearly dependent groups can be forecast exactly by a neural net. Furthermore, we derive generally applicable summation and product rules that permit us to calculate the associated optimum connection weights for the particular network architecture for complicated but exactly predictable functions. If a general network is initialized with these particular weights, the learning process for general (noisy) data can be significantly accelerated and the forecasting accuracy increased. We also show that neural nets can be used to predict the finite value of diverging sums, which is a generic problem for most perturbation-based approaches to physical systems.
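To make the notion of exact predictability concrete, here is a minimal numerical sketch (our own illustration, not taken from the article; the parameters a, b, h and t are arbitrary choices) showing that \(\exp(at)\) and \(\cos(bt)\), sampled on a uniform grid of spacing h, obey exact linear recursions in their lagged values. A linear single-output net whose connection weights equal the recursion coefficients therefore forecasts such series without error.

```python
import numpy as np

a, b, h, t = 0.5, 3.0, 0.1, 2.0   # illustrative parameters

# exp(a t): one lag suffices, with weight W1 = exp(a h)
lhs = np.exp(a * t)
rhs = np.exp(a * h) * np.exp(a * (t - h))
print(np.isclose(lhs, rhs))       # True

# cos(b t): two lags suffice, with weights (W1, W2) = (2 cos(b h), -1)
lhs = np.cos(b * t)
rhs = 2 * np.cos(b * h) * np.cos(b * (t - h)) - np.cos(b * (t - 2 * h))
print(np.isclose(lhs, rhs))       # True
```

Sums and products of such functions remain exactly predictable; the summation and product rules derived in the paper yield the corresponding combined weights.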



Acknowledgements

We appreciate Prof. N. Christensen’s enthusiasm for this work at its early stages and numerous illuminating discussions. We also thank Prof. R.F. Martin and C. Gong for very helpful discussions and pointing out recent related work in the literature. This work has been supported by the US National Science Foundation and Research Corporation.

Author information


Corresponding author

Correspondence to Xing Fang.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A: Proof of the Superposition Laws for Sums

Here we briefly outline the basic ideas for the general proof of the superposition law that permits us to construct the perfect connection weights \(W_{k}(F)\) for \(F(t)\equiv f(t) + g(t)\) in terms of the original weights of f(t), denoted by \(W_{n}(f)\), and of g(t), denoted by \(W_{m}(g)\). We require

$$\begin{aligned} f(t)+g(t)=\; & {} \sum _{k=1}^{N+M}W_{k}(f+g)\left[ f(t-kh)+g(t-kh)\right] \end{aligned}$$
(10)
$$\begin{aligned} f(t)=\; & {} \sum _{n=1}^{N}W_{n}(f) \ f(t-nh) \end{aligned}$$
(11)
$$\begin{aligned} g(t)=\; & {} \sum _{m=1}^{M}W_{m}(g) \ g(t-mh). \end{aligned}$$
(12)

The proof in full generality is rather tedious and is best performed with a computer algebra package such as Mathematica or MATLAB. The basic idea is to express the \((M+1)\) function values f(t), \(f(t-h)\), ..., \(f(t-Mh)\) in terms of the N values at earlier times \(f(t-jh)\) with \(j = M+1, \ldots , N+M\). This iterative procedure, which must be carried out in strict consecutive order, is quite cumbersome. For example, by evaluating both sides of Eq. (11) for the argument \(t-Mh\), we have

$$\begin{aligned} f(t-Mh)=\sum _{n=1}^{N}W_{n}(f) \ f\left( t-(M+n)h\right) . \end{aligned}$$
(13)

Similarly, for the argument \(t-(M-1)h\) and insertion of Eq. (13) we obtain

$$\begin{aligned}&f(t-(M-1)h) =\sum _{n=1}^{N}W_{n}(f) \ f\left( t-(M-1+n)h\right) \nonumber \\&\quad =W_{1}(f) \ f(t-Mh) + \sum _{n=2}^{N}W_{n}(f) \ f\left( t-(M-1+n)h\right) \nonumber \\&\quad =W_{1}(f) \sum _{n=1}^{N}W_{n}(f) \ f\left( t-(M+n)h\right) \nonumber \\&\quad + \sum _{n=2}^{N}W_{n}(f) \ f\left( t-(M-1+n)h\right) . \end{aligned}$$
(14)

This sequence of iterative steps needs to be repeated \((M+1)\) times until the function f(t) can be expressed in terms of \(f(t-j h)\) with \(j = M+1, \ldots , N+M\) and all \(W_{n}(f)\). The same replacements need to be performed for the function g(t) as well. Here the \((N+1)\) function values g(t), \(g(t-h)\), ..., \(g(t-N h)\) need to be expressed in terms of the M values at earlier times \(g(t-j h)\) with \(j=N+1, \ldots , N+M\).

After these expressions are inserted into Eq. (10), this equation finally becomes a single linear equation for the \((N+M)\) unknown weights \(W_{k}(f+g)\), containing all N weights \(W_{n}(f)\) and all M weights \(W_{m}(g)\) as well as the N functions \(f(t-j h)\) with \(j = M+1, \ldots , N+M\) and the M functions \(g(t-jh)\) with \(j = N+1, \ldots , N+M\).

As this single equation needs to be satisfied for all times t, the \((N+M)\) pre-factors in front of all \(f(t-j h)\) and \(g(t-j h)\) need to vanish identically. The corresponding set of \((N+M)\) equations for the \((N+M)\) weights \(W_{k}(f+g)\) can be solved uniquely.
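The substitution-and-collection procedure just described is easily automated with a computer algebra system. The following sympy sketch (our own illustration, not code from the paper) performs it for the \(N=3\), \(M=2\) case worked out below and solves the resulting linear system for the combined weights; its output reproduces Eq. (21).

```python
import sympy as sp

N, M = 3, 2
U = sp.symbols(f'U1:{N + 1}')        # weights W_n(f)
V = sp.symbols(f'V1:{M + 1}')        # weights W_m(g)
W = sp.symbols(f'W1:{N + M + 1}')    # unknown weights W_k(f+g)
F = sp.symbols(f'F0:{N + M + 1}')    # F[j] stands for f(t - j h)
G = sp.symbols(f'G0:{N + M + 1}')    # G[j] stands for g(t - j h)

# express f(t), ..., f(t-Mh) through f(t-(M+1)h), ..., f(t-(N+M)h),
# working backwards so each substitution only uses already reduced samples
subs_f = {}
for j in range(M, -1, -1):
    subs_f[F[j]] = sp.expand(sum(U[n] * F[j + n + 1] for n in range(N)).subs(subs_f))

# likewise express g(t), ..., g(t-Nh) through g(t-(N+1)h), ..., g(t-(N+M)h)
subs_g = {}
for j in range(N, -1, -1):
    subs_g[G[j]] = sp.expand(sum(V[m] * G[j + m + 1] for m in range(M)).subs(subs_g))

# Eq. (10): 0 = -(f+g)(t) + sum_k W_k [f(t-kh) + g(t-kh)]
eq = -(F[0] + G[0]) + sum(W[k - 1] * (F[k] + G[k]) for k in range(1, N + M + 1))
eq = sp.expand(eq.subs(subs_f).subs(subs_g))

# the prefactors of the remaining independent samples must vanish identically;
# eq is linear in these samples, so differentiation extracts each prefactor
basis = [F[j] for j in range(M + 1, N + M + 1)] + [G[j] for j in range(N + 1, N + M + 1)]
print(sp.solve([sp.diff(eq, s) for s in basis], W))
```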

To give the reader a better idea of the complexity of the derivation, we present a concrete example with \(N = 3\) and \(M = 2\). For example, \(f(t) = t^{2} \exp (3 t)\) and \(g(t) = \cos (5t)\) would fall into this category, with the known weights according to Table 1.

$$\begin{aligned} f(t)+g(t)=\; & {} \sum _{k=1}^{5}W_{k}\left[ f(t-kh)+g(t-kh)\right] \end{aligned}$$
(15)
$$\begin{aligned} f(t)=\; & {} U_{1} \ f(t-h)+U_{2} f(t-2h)+U_{3} \ f(t-3h) \end{aligned}$$
(16)
$$\begin{aligned} g(t)=\; & {} V_{1} \ g(t-h)+V_{2} \ g(t-2h), \end{aligned}$$
(17)

where for notational simplicity we abbreviate \(U_{n}\equiv W_{n}(f)\) and \(V_{m}\equiv W_{m}(g)\). Using Eqs. (16) and  (17) repeatedly, the sequence of required replacements leads to

$$\begin{aligned} f(t)=\; & {} \left[ U_{1}^{3}+2U_{1}U_{2}+U_{3}\right] f(t-3h)\nonumber \\&+\left[ U_{2}\left( U_{1}^{2}+U_{2}\right) +U_{1} U_{3}\right] f(t-4h) \nonumber \\+ & {} \left( U_{1}^{2}+U_{2}\right) U_{3}f(t-5h) \nonumber \\ f(t-h)=\; & {} \left[ U_{1}^{2}+U_{2}\right] f(t-3h)+\left( U_{1}U_{2}\right. \nonumber \\&\left. +U_{3}\right) f(t-4h)+U_{1}U_{3}f(t-5h) \nonumber \\ f(t-2h)=\; & {} U_{1}f(t-3h)+U_{2}f(t-4h)+U_{3}f(t-5h) \nonumber \\ g(t)=\; & {} \left( V_{1}^{4}+3V_{1}^{2}V_{2}+V_{2}^{2}\right) \ g(t-4h)\nonumber \\&+V_{1}V_{2}\left( V_{1}^{2}+2V_{2}\right) \ g(t-5h) \nonumber \\ g(t-h)=\; & {} \left( V_{1}^{3}+2V_{1}V_{2}\right) \ g(t-4h)\nonumber \\&+V_{2}\left( V_{1}^{2}+V_{2}\right) \ g(t-5h) \nonumber \\ g(t-2h)=\; & {} \left( V_{1}^{2}+V_{2}\right) \ g(t-4h)+V_{1}V_{2}\ g(t-5h) \nonumber \\ g(t-3h)=\; & {} V_{1}\ g(t-4h)+V_{2}\ g(t-5h). \end{aligned}$$
(18)

As a result, we obtain for Eq. (15)

$$\begin{aligned} 0=\; & {} -f(t)-g(t)+ \sum _{k=1}^{5} W_{k}\left[ f(t-k h) + g(t-k h)\right] \nonumber \\= \;& {} A_{1}f(t-3h)+A_{2}f(t-4h)\nonumber \\&+A_{3}f(t-5h)+A_{4}\ g(t-4h)+A_{5}\ g(t-5h), \end{aligned}$$
(19)

where the five coefficients are given by

$$\begin{aligned} A_{1}=\; & {} -U_{1}^{3}-U_{3}-2U_{1}U_{2}+\left( U_{1}^{2}+U_{2}\right) W_{1}+U_{1}W_{2}+W_{3} \nonumber \\ A_{2}=\; & {} -U_{2}^{2}-U_{1}\left( U_{1}U_{2}+U_{3}\right) \nonumber \\&+\left( U_{1}U_{2}+U_{3}\right) W_{1}+U_{2}W_{2}+W_{4} \nonumber \\ A_{3}=\; & {} -U_{2}U_{3}-U_{1}U_{1}U_{3}+U_{3}U_{1}W_{1}+U_{3}W_{2}+W_{5} \nonumber \\ A_{4}=\; & {} -V_{1}^{4}-3V_{1}^{2}V_{2}-V_{2}^{2}+(V_{1}^{3}\nonumber \\&+2V_{1}V_{2})W_{1}+\left( V_{1}^{2}+V_{2}\right) W_{2}+V_{1}W_{3}+W_{4} \nonumber \\ A_{5}=\; & {} -V_{1}^{3}V_{2}-2V_{1}V_{2}^{2}+\left( V_{1}^{2}V_{2}\right. \nonumber \\&\left. +V_{2}^{2}\right) W_{1}+V_{1}V_{2}W_{2}+V_{2}W_{3}+W_{5}. \end{aligned}$$
(20)

If we equate these five coefficients \(A_{k}\) to zero, we obtain the final solutions for the weights W as

$$\begin{aligned} W_{1}= \;& {} U_{1}+V_{1} \nonumber \\ W_{2}= \;& {} U_{2}+V_{2}-U_{1}V_{1} \nonumber \\ W_{3}=\; & {} U_{3}-U_{1}V_{2}-U_{2}V_{1} \nonumber \\ W_{4}=\; & {} -U_{2}V_{2}-U_{3}V_{1} \nonumber \\ W_{5}=\; & {} -U_{3}V_{2} \end{aligned}$$
(21)

In view of the complexity of the expressions in the intermediate steps, these forms are remarkably simple and they are in full agreement with the general solutions of Eq. (7) for arbitrary N and M.
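As an independent numerical spot check of Eq. (21) (our own sketch; the grid spacing, anchor time and test times are arbitrary choices), one can estimate the weights \(U_{n}\) and \(V_{m}\) directly from samples of \(f(t)=t^{2}\exp (3t)\) and \(g(t)=\cos (5t)\), combine them according to Eq. (21), and confirm that \(f+g\) obeys the resulting five-term recursion.

```python
import numpy as np

h, t0 = 0.1, 1.0                                    # illustrative spacing and anchor time
f = lambda t: t**2 * np.exp(3 * t)                  # N = 3 component
g = lambda t: np.cos(5 * t)                         # M = 2 component

def fit_weights(func, order):
    """Solve func(t) = sum_k W_k func(t - k h) from `order` sample equations."""
    ts = t0 + h * np.arange(order)
    A = np.array([[func(t - k * h) for k in range(1, order + 1)] for t in ts])
    return np.linalg.solve(A, np.array([func(t) for t in ts]))

U = fit_weights(f, 3)                               # U1, U2, U3
V = fit_weights(g, 2)                               # V1, V2

# combined weights according to Eq. (21)
W = [U[0] + V[0],
     U[1] + V[1] - U[0] * V[0],
     U[2] - U[0] * V[1] - U[1] * V[0],
     -U[1] * V[1] - U[2] * V[0],
     -U[2] * V[1]]

F = lambda t: f(t) + g(t)
for t in (2.0, 2.5, 3.0):                           # arbitrary test times
    pred = sum(W[k] * F(t - (k + 1) * h) for k in range(5))
    print(abs(F(t) - pred) / abs(F(t)))             # should sit at the rounding level
```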

Appendix B: Weight Factors for the Product Rule

Here we briefly outline the basic ideas for the general proof of the superposition law that permits us to construct the perfect connection weights \(W_{k}(F)\) for products \(F(t)\equiv f(t)g(t)\) in terms of the original weights of f(t), denoted by \(W_{n}(f)\), and of g(t), denoted by \(W_{m}(g)\). We require

$$\begin{aligned} f(t)\ g(t)=\; & {} \sum _{k=1}^{NM}W_{k}(fg) f(t-kh)\ g(t-kh) \end{aligned}$$
(22)
$$\begin{aligned} f(t)=\; & {} \sum _{n=1}^{N}W_{n}(f) \ f(t-nh) \end{aligned}$$
(23)
$$\begin{aligned} g(t)=\; & {} \sum _{m=1}^{M}W_{m}(g) \ g(t-mh). \end{aligned}$$
(24)

The approach for deriving the weights \(W_{k}(fg)\) from the \(W_{k}(f)\) and \(W_{k}(g)\) is, in principle, similar to the one used in Appendix A, but it is significantly more complicated, and we illustrate it here only for the case \(N=3\) and \(M=3\). We iteratively use Eqs. (23) and (24) to replace the functions f and g at later times in terms of the six values \(f(t-kh)\) and \(g(t-kh)\) for \(k = 7, 8\) and 9. After these replacements, the central equation Eq. (22) for the nine weights \(W_{k}(fg)\) depends on the nine time-dependent product functions \(f(t-k_{1}h)g(t-k_{2}h)\) with \(k_{1} = 7, 8, 9\) and \(k_{2} = 7, 8, 9\). If we assume that these nine functions are linearly independent of each other, the corresponding nine pre-factors must vanish. Solving the resulting nine coupled but linear equations for the nine weights \(W_{k}(fg)\) with \(k=1, 2, \ldots , 9\), we finally obtain the solutions

$$\begin{aligned} W_{1}(fg)=\; & {} U_{1}V_{1} \end{aligned}$$
(25)
$$\begin{aligned} W_{2}(fg)=\; & {} U_{1}^{2}V_{2}+U_{2}V_{1}^{2}+2U_{2}V_{2} \end{aligned}$$
(26)
$$\begin{aligned} W_{3}(fg)=\; & {} U_{3}\left( V_{1}^{3}+3V_{1}V_{2}+3V_{3}\right) \nonumber \\&+U_{1}\left( U_{2}V_{1}V_{2}+U_{1}^{2}V_{3}+3U_{2}V_{3}\right) \end{aligned}$$
(27)
$$\begin{aligned} W_{4}(fg)=\; & {} U_{1}^{2}U_{2}V_{1}V_{3}-U_{2}^{2}\left( V_{2}^{2}-2V_{1}V_{3}\right) \nonumber \\&+U_{1}U_{3}\left[ V_{2}\left( V_{1}^{2}+2V_{2}\right) +V_{1}V_{3}\right] \end{aligned}$$
(28)
$$\begin{aligned} W_{5}(fg)=\; & {} -U_{1}U_{2}^{2}V_{2}V_{3}+U_{1}^{2}U_{3}\left( V_{1}^{2}\right. \nonumber \\&\left. +2V_{2}\right) V_{3}+U_{2}U_{3}\left( -V_{1}V_{2}^{2}+2V_{1}^{2}V_{3}-V_{2}V_{3}\right) \end{aligned}$$
(29)
$$\begin{aligned} W_{6}(fg)=\; & {} U_{2}^{3}V_{3}^{2}-U_{1}U_{2}U_{3}V_{3}\left( V_{1}V_{2}+3V_{3}\right) \nonumber \\&+U_{3}^{2}\left( V_{2}^{3}-3V_{1}V_{2}V_{3}-3V_{3}^{2}\right) \end{aligned}$$
(30)
$$\begin{aligned} W_{7}(fg)=\; & {} U_{3}V_{3}\left[ U_{2}^{2}V_{1}V_{3}+U_{1}U_{3}\left( V_{2}^{2}-2V_{1}V_{3}\right) \right] \end{aligned}$$
(31)
$$\begin{aligned} W_{8}(fg)=\; & {} -U_{2}U_{3}^{2}V_{2}V_{3}^{2} \end{aligned}$$
(32)
$$\begin{aligned} W_{9}(fg)=\; & {} U_{3}^{3}V_{3}^{3}. \end{aligned}$$
(33)

For notational simplicity, we have again used the abbreviations \(U_{k}\equiv W_{k}(f)\) and \(V_{k}\equiv W_{k}(g)\). Unfortunately, we have not been able to recognize a regular pattern in these nine weights that would allow us to predict the corresponding 16 weights for the \(N=4\), \(M=4\) system. Although we note that in each term the indices of the U factors and the indices of the V factors each sum to the index k of \(W_{k}(fg)\), it seems difficult to predict reliably the corresponding combinations of these factors, their pre-factors and their signs.
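The nine-equation elimination described above can again be delegated to computer algebra. The following sympy sketch (our own illustration, not the authors' code) reduces all samples with \(k\le 6\) to those at \(k = 7, 8, 9\), collects the prefactors of the nine products \(f(t-k_{1}h)\,g(t-k_{2}h)\), and solves for the nine weights; its output should reproduce Eqs. (25)-(33).

```python
import sympy as sp

U = sp.symbols('U1:4')       # weights W_k(f), N = 3
V = sp.symbols('V1:4')       # weights W_k(g), M = 3
W = sp.symbols('W1:10')      # unknown product weights W_k(fg)
F = sp.symbols('F0:10')      # F[j] stands for f(t - j h)
G = sp.symbols('G0:10')      # G[j] stands for g(t - j h)

# reduce f(t - j h) and g(t - j h) for j = 0..6 to the samples at j = 7, 8, 9
subs_f, subs_g = {}, {}
for j in range(6, -1, -1):
    subs_f[F[j]] = sp.expand(sum(U[n] * F[j + n + 1] for n in range(3)).subs(subs_f))
    subs_g[G[j]] = sp.expand(sum(V[m] * G[j + m + 1] for m in range(3)).subs(subs_g))

# Eq. (22): 0 = -f(t) g(t) + sum_k W_k f(t - k h) g(t - k h)
eq = -F[0] * G[0] + sum(W[k - 1] * F[k] * G[k] for k in range(1, 10))
eq = sp.expand(eq.subs(subs_f).subs(subs_g))

# eq is bilinear in the remaining samples, so the mixed derivative with
# respect to F[i] and G[j] extracts the prefactor of each of the nine products
eqs = [sp.diff(eq, F[i], G[j]) for i in range(7, 10) for j in range(7, 10)]
print(sp.solve(eqs, W))
```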

Appendix C: Optimum Weights Involving a Sum of Products

Here we examine the optimum weights for the specific function f(t)

$$\begin{aligned} f(t)=3\ \exp (a_{1}t)\ \cos (b_{1}t)+\exp (a_{2}t)\cos (b_{2}t), \end{aligned}$$
(34)

where we have used the specific values \(a_{1}=-2\) and \(a_{2}=2\) for the decay and growth rates and \(b_{1} = 70\) and \(b_{2} =20\) for the two frequencies. As a sum of the class = 2 functions \(\exp (a_{1}t) \cos (b_{1}t)\) and \(\exp (a_{2}t) \cos (b_{2}t)\), the function f(t) is again an epf of class = 4. Applying consecutively the superposition laws derived in this work for the optimal weights of sums and products of functions (Eqs. 7, 25 and 26), one can derive the following analytical expressions for the four optimal weights.

$$\begin{aligned} W_{1}=\; & {} 2\exp (a_{1}h)\cos (b_{1}h)\nonumber \\&+2\exp (a_{2}h)\cos (b_{2}h) \end{aligned}$$
(35)
$$\begin{aligned} W_{2}=\; & {} -\exp (2a_{1}h)-\exp (2a_{2}h)\nonumber \\&-4\exp (a_{1}h+a_{2}h)\cos (b_{1}h)\cos (b_{2}h) \end{aligned}$$
(36)
$$\begin{aligned} W_{3}=\; & {} 2\exp (a_{1}h+2a_{2}h)\cos (b_{1}h)\nonumber \\&+2\exp (2a_{1}h+a_{2}h)\cos (b_{2}h) \end{aligned}$$
(37)
$$\begin{aligned} W_{4}=\; & {} -\exp (2a_{1}h+2a_{2}h). \end{aligned}$$
(38)
Fig. 6: The dependence of the four optimal weights for the test function of Eq. (34), with \(a_{1}= -2\), \(a_{2} = 2\), \(b_{1} = 70\) and \(b_{2} = 20\), as a function of the grid spacing h

In Fig. 6 we have graphed these four optimum weights as a function of the grid spacing h. In the limit of small spacings \(h\rightarrow 0\), the weights approach the values \((W_{1},W_{2},W_{3},W_{4}) =(4, -6, 4, -1)\equiv {\mathbf {W}}(h=0)\). This set corresponds precisely to the optimal (h-independent) weights \(B_{n4}\) for any polynomial of degree 3. This is not a coincidence: in this limit the optimal weights of the constituent functions \(\exp (at)\) and \(\cos (bt)\) of Eq. (34) already converge to the alternating binomial coefficients given in Eq. (8).

The key question is whether the binomial coefficients can act as helpful initial values for the learning algorithm in the relevant case \(h\ne 0\). For \(h<0.0076\), each of the optimal weights differs from the corresponding entry of \({\mathbf {W}}(h=0)\) by at most \(10\%\). For example, for \(h=0.01\) the exact optimal weights are \({\mathbf {W}}(0.01) =(3.50, -5.00, 3.48, -1.00)\), very similar to \(B_{n4}=(-1)^{n+1}\binom{4}{n}\). This suggests that, as long as the grid spacing is not too large, the binomial set should be an ideal set of weight parameters with which to initialize the net for the learning process.
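A short numerical check of this discussion (our own sketch; the test time is an arbitrary choice) evaluates Eqs. (35)-(38) for the quoted parameters, confirms the small-h limit (4, -6, 4, -1) and the values \({\mathbf {W}}(0.01)\approx (3.50, -5.00, 3.48, -1.00)\), and verifies that f(t) of Eq. (34) obeys the corresponding four-term recursion.

```python
import numpy as np

a1, a2, b1, b2 = -2.0, 2.0, 70.0, 20.0

def weights(h):
    """Optimal weights of Eqs. (35)-(38) for the test function of Eq. (34)."""
    return np.array([
        2 * np.exp(a1 * h) * np.cos(b1 * h) + 2 * np.exp(a2 * h) * np.cos(b2 * h),
        -np.exp(2 * a1 * h) - np.exp(2 * a2 * h)
            - 4 * np.exp((a1 + a2) * h) * np.cos(b1 * h) * np.cos(b2 * h),
        2 * np.exp((a1 + 2 * a2) * h) * np.cos(b1 * h)
            + 2 * np.exp((2 * a1 + a2) * h) * np.cos(b2 * h),
        -np.exp(2 * (a1 + a2) * h),
    ])

print(weights(1e-8))   # approaches the binomial limit ( 4, -6,  4, -1)
print(weights(0.01))   # approx (3.50, -5.00, 3.48, -1.00)

# the recursion f(t) = sum_k W_k f(t - k h) holds exactly for Eq. (34)
f = lambda t: 3 * np.exp(a1 * t) * np.cos(b1 * t) + np.exp(a2 * t) * np.cos(b2 * t)
h, t = 0.01, 1.7
W = weights(h)
pred = sum(W[k] * f(t - (k + 1) * h) for k in range(4))
print(abs(f(t) - pred) / abs(f(t)))   # close to machine precision
```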

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Yost, J., Rizo, L., Fang, X. et al. Exactly Predictable Functions for Simple Neural Networks. SN COMPUT. SCI. 3, 64 (2022). https://doi.org/10.1007/s42979-021-00949-2


  • Received:

  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1007/s42979-021-00949-2

Keywords