
Square Unit Augmented Radially Extended Multilayer Perceptrons

  • Chapter
Neural Networks: Tricks of the Trade

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1524)

Abstract

Consider a multilayer perceptron (MLP) with d inputs, a single hidden sigmoidal layer and a linear output. By adding an additional d inputs to the network with values set to the square of the first d inputs, properties reminiscent of higher-order neural networks and radial basis function networks (RBFN) are added to the architecture with little added expense in terms of weight requirements. Of particular interest, this architecture has the ability to form localized features in a d-dimensional space with a single hidden node but can also span large volumes of the input space; thus, the architecture has the localized properties of an RBFN but does not suffer as badly from the curse of dimensionality. I refer to a network of this type as a SQuare Unit Augmented, Radially Extended, MultiLayer Perceptron (SQUARE-MLP or SMLP).
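
To make the construction concrete, below is a minimal NumPy sketch of an SMLP forward pass. This is not the author's code; the function and weight names are illustrative. The point it demonstrates: because each hidden unit's pre-activation w·x + v·x² + b is quadratic in x, a unit whose square-weights v are negative responds only inside an ellipsoid, which is the localized, RBFN-like behavior the abstract describes.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def smlp_forward(x, W1, V1, b1, W2, b2):
        # Augment the d raw inputs with their elementwise squares, so each
        # hidden unit computes sigmoid(x @ w + (x**2) @ v + b): a sigmoid of
        # a quadratic form in x rather than of a hyperplane projection.
        a = x @ W1 + (x ** 2) @ V1 + b1   # (n, h) hidden pre-activations
        h = sigmoid(a)                    # (n, h) sigmoidal hidden layer
        return h @ W2 + b2                # (n, o) linear output

    # A single hidden unit forming a localized bump in d = 2 dimensions:
    # pre-activation = 2 - 4*||x||^2, so the unit is "on" only near the origin.
    d = 2
    W1 = np.zeros((d, 1))        # no linear term on the raw inputs
    V1 = -4.0 * np.ones((d, 1))  # negative square-weights -> ellipsoidal bump
    b1 = np.array([2.0])
    W2 = np.array([[1.0]])
    b2 = np.array([0.0])

    x = np.array([[0.0, 0.0], [1.0, 1.0]])      # one near point, one far point
    print(smlp_forward(x, W1, V1, b1, W2, b2))  # ~[[0.88], [0.0025]]

Note that the augmentation adds only d extra weights per hidden unit, which is the "little added expense" mentioned above.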




Copyright information

© 1998 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Flake, G.W. (1998). Square Unit Augmented Radially Extended Multilayer Perceptrons. In: Orr, G.B., Müller, K.-R. (eds) Neural Networks: Tricks of the Trade. Lecture Notes in Computer Science, vol 1524. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-49430-8_8

  • DOI: https://doi.org/10.1007/3-540-49430-8_8

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-65311-0

  • Online ISBN: 978-3-540-49430-0

  • eBook Packages: Springer Book Archive
