
Analysis of Support Vector Machines Regression


Abstract

Support vector machines regression (SVMR) is a regularized learning algorithm in reproducing kernel Hilbert spaces with the so-called ε-insensitive loss function. Compared with the well-understood least squares regression, the theory of SVMR is less developed, in particular as regards quantitative estimates of the algorithm's convergence. This paper provides an error analysis for SVMR, drawing on methods recently developed for the analysis of classification algorithms, such as the projection operator and the iteration technique. The main result is an explicit learning rate for the SVMR algorithm under certain assumptions.
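For concreteness, the regularization scheme the abstract refers to can be written out. The following is the standard SVMR formulation from the learning-theory literature; the notation (sample z, regularization parameter λ, hypothesis space H_K) is supplied here for illustration and is not quoted from the paper:

    f_z = \arg\min_{f \in \mathcal{H}_K} \Big\{ \frac{1}{m} \sum_{i=1}^{m} \lvert y_i - f(x_i) \rvert_\varepsilon + \lambda \lVert f \rVert_K^2 \Big\},
    \qquad \lvert t \rvert_\varepsilon = \max\{ \lvert t \rvert - \varepsilon,\; 0 \},

where z = {(x_i, y_i)}_{i=1}^{m} is the sample, H_K is the reproducing kernel Hilbert space of a Mercer kernel K, and λ > 0 is the regularization parameter. Deviations smaller than ε are not penalized at all, which is what distinguishes the ε-insensitive loss from the least squares loss.

For numerical experimentation, a minimal sketch using scikit-learn's SVR follows. This is an assumption for illustration only: the paper contains no code, and SVR's C parameter corresponds only roughly to 1/(2λm) in the formulation above.

    import numpy as np
    from sklearn.svm import SVR

    # Toy one-dimensional data: a noisy sine curve.
    rng = np.random.default_rng(0)
    X = rng.uniform(-3.0, 3.0, size=(200, 1))
    y = np.sin(X).ravel() + 0.1 * rng.standard_normal(200)

    # Gaussian (RBF) kernel SVR: epsilon is the half-width of the
    # insensitive tube, C the inverse regularization strength.
    model = SVR(kernel="rbf", C=1.0, epsilon=0.1)
    model.fit(X, y)
    print(model.predict([[0.5]]))  # should be close to sin(0.5) ≈ 0.479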



Author information


Corresponding author

Correspondence to Hongzhi Tong.

Additional information

Communicated by Felipe Cucker.

Research supported by NNSF of China Nos. 10471002 and 10571010, and RFDP of China No. 20060001010.


About this article

Cite this article

Tong, H., Chen, D.R. & Peng, L. Analysis of Support Vector Machines Regression. Found Comput Math 9, 243–257 (2009). https://doi.org/10.1007/s10208-008-9026-0

