Improving the approximation and convergence capabilities of projection pursuit learning


Abstract

One nonparametric regression technique that has been successfully applied to high-dimensional data is projection pursuit regression (PPR). In this method, the regression surface is approximated by a sum of empirically determined univariate functions of linear combinations of the predictors. Projection pursuit learning (PPL), proposed by Hwang et al., formulates PPR using a two-layer feedforward neural network. One of the main differences between PPR and PPL is that the smoothers in PPR are nonparametric, whereas those in PPL are based on Hermite functions of some predefined highest order R. While the convergence property of PPR is already known, that of PPL has not been thoroughly studied. In this paper, we demonstrate that PPL networks do not have the universal approximation and strong convergence properties for any finite R. However, by including a bias term in each linear combination of the predictor variables, PPL networks can regain these capabilities, independent of the exact choice of R. It is also shown experimentally that this modification improves generalization performance in regression problems and produces smoother decision surfaces in classification problems.
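
To make the model concrete, the following sketch (Python with NumPy; not the authors' implementation) evaluates a PPL-style predictor: each of m hidden units projects the input onto a direction a_j, adds the bias term b_j advocated in this paper, and smooths the projected value with a Hermite-function expansion truncated at order R. All names here (hermite_function, ppl_predict, directions, biases, coeffs) are illustrative assumptions, not terminology from the paper.

    import math
    import numpy as np
    from numpy.polynomial.hermite import hermval

    def hermite_function(z, k):
        # k-th orthonormal Hermite function: h_k(z) = c_k * H_k(z) * exp(-z^2 / 2),
        # where H_k is the (physicists') Hermite polynomial of order k.
        c = np.zeros(k + 1)
        c[k] = 1.0
        norm = 1.0 / math.sqrt((2.0 ** k) * math.factorial(k) * math.sqrt(math.pi))
        return norm * hermval(z, c) * np.exp(-(z ** 2) / 2.0)

    def ppl_predict(x, directions, biases, coeffs, R):
        # PPL-style prediction for one input vector x (parameter names are illustrative):
        #   directions: (m, d) projection directions a_j
        #   biases:     (m,)   bias terms b_j -- the modification studied in the paper
        #   coeffs:     (m, R+1) Hermite-expansion weights c_{j,k} of each smoother
        y = 0.0
        for a_j, b_j, c_j in zip(directions, biases, coeffs):
            z = float(np.dot(a_j, x)) + b_j          # projection plus bias
            y += sum(c_j[k] * hermite_function(z, k) for k in range(R + 1))
        return y

    # Toy usage with random parameters: d = 3 inputs, m = 2 projection units, R = 4.
    rng = np.random.default_rng(0)
    d, m, R = 3, 2, 4
    y_hat = ppl_predict(rng.normal(size=d),
                        rng.normal(size=(m, d)),
                        rng.normal(size=m),
                        rng.normal(size=(m, R + 1)),
                        R)

Without the bias terms, each smoother sees only zero-mean projections of centred data, which is what limits approximation for finite R; the added b_j shifts each projection freely, which is the modification shown to restore universal approximation.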

References

  1. S.E. Fahlman, C. Lebiere. The cascade-correlation learning architecture, in D.S. Touretzky, ed., Advances in Neural Information Processing Systems 2, Morgan Kaufmann (Los Altos, CA), pp. 524–532, 1990.

  2. D.Y. Yeung. Constructive neural networks as estimators of Bayesian discriminant functions, Pattern Recognition, vol. 26, no. 1, pp. 189–204, 1993.

  3. J. Moody, N. Yarvin. Networks with learned unit response functions, in J.E. Moody, S.J. Hanson, R.P. Lippmann, eds., Advances in Neural Information Processing Systems 4, Morgan Kaufmann, pp. 1048–1055, 1992.

  4. Y. Zhao, C.G. Atkeson. Projection pursuit learning, in Proceedings of the International Joint Conference on Neural Networks, vol. 1, Seattle, WA, USA, pp. 869–874, July 1991.

  5. J.N. Hwang, S.R. Lay, M. Maechler, D. Martin, J. Schimert. Regression modeling in back-propagation and projection pursuit learning, IEEE Transactions on Neural Networks, vol. 5, no. 3, pp. 342–353, May 1994.

  6. J.N. Hwang, S.S. You, S.R. Lay, I.C. Jou. What's wrong with a cascaded correlation learning network: A projection pursuit learning perspective. Submitted to IEEE Transactions on Neural Networks.

  7. J.H. Friedman, W. Stuetzle. Projection pursuit regression, Journal of the American Statistical Association, vol. 76, no. 376, pp. 817–823, 1981.

  8. G. Sansone. Orthogonal Functions, Dover, New York, 1991.

  9. L.K. Jones. On a conjecture of Huber concerning the convergence of projection pursuit regression, The Annals of Statistics, vol. 15, no. 2, pp. 880–882, 1987.

  10. S.E. Fahlman. Faster learning variations on back-propagation: An empirical study, in D.S. Touretzky, G.E. Hinton, T.J. Sejnowski, eds., Proceedings of the 1988 Connectionist Models Summer School, Morgan Kaufmann (Los Altos, CA), pp. 38–51, 1988.

  11. H. Akaike. A new look at the statistical model identification, IEEE Transactions on Automatic Control, vol. AC-19, no. 6, pp. 716–723, December 1974.

  12. K. Hornik. Some new results on neural network approximation, Neural Networks, vol. 6, pp. 1069–1072, 1993.

  13. K. Hornik. Approximation capabilities of multilayer feedforward networks, Neural Networks, vol. 4, pp. 251–257, 1991.

  14. L.K. Jones. A simple lemma on greedy approximation in Hilbert space and convergence rates for projection pursuit regression and neural network training, The Annals of Statistics, vol. 20, no. 1, pp. 608–613, 1992.

  15. J.H. Friedman, E. Grosse, W. Stuetzle. Multidimensional additive spline approximation, SIAM Journal on Scientific and Statistical Computing, vol. 4, no. 2, pp. 291–301, June 1983.

  16. J. Park, I. Sandberg. Universal approximation using radial-basis-function networks, Neural Computation, vol. 3, pp. 246–257, 1991.


Cite this article

Kwok, T.Y., Yeung, D.Y. Improving the approximation and convergence capabilities of projection pursuit learning. Neural Processing Letters 2, 20–25 (1995). https://doi.org/10.1007/BF02311575

