
A Sparse Sampling Method for Classification Based on Likelihood Factor

  • Conference paper
Advances in Neural Networks - ISNN 2008 (ISNN 2008)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 5264)


  • 2958 Accesses

Abstract

Classical SVMs suffer from heavy computational cost and complex discriminant functions when the training set grows large. In this paper, a classification method based on sparse sampling is proposed. A likelihood factor that indicates the importance of each sample is defined. According to the likelihood factor, unimportant samples are clipped and misjudged samples are relabeled; this process is called sparse sampling. Sparse sampling reduces both the number of training samples and the number of support vectors, so the improved classification method lowers computational complexity and simplifies the discriminant function.
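The abstract does not give the concrete definition of the likelihood factor, but the procedure it describes (score each sample, clip the unimportant ones, revise misjudged ones, then train an SVM on the reduced set) can be sketched as follows. The Gaussian class-conditional densities, the 50% clipping quantile, and the flip-if-factor-below-one rule are illustrative assumptions, not the authors' definitions:

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# Two-class toy data standing in for the paper's training set.
X, y = make_blobs(n_samples=400, centers=2, random_state=0)

def class_density(X, y, c):
    """Fit a Gaussian to the samples of class c (assumed density model)."""
    Xc = X[y == c]
    return multivariate_normal(Xc.mean(axis=0), np.cov(Xc.T))

p0, p1 = class_density(X, y, 0), class_density(X, y, 1)

# Likelihood factor: ratio of own-class density to other-class density.
# Large factor -> sample is easy/redundant; factor < 1 -> likely misjudged.
lik = np.where(y == 0, p0.pdf(X) / p1.pdf(X), p1.pdf(X) / p0.pdf(X))

# Sparse sampling: clip the easiest half of the samples, and relabel
# samples whose factor says the other class is more likely.
keep = lik < np.quantile(lik, 0.5)
y_rev = np.where(lik < 1.0, 1 - y, y)

# Train an ordinary SVM on the reduced, revised training set.
clf = SVC(kernel="rbf").fit(X[keep], y_rev[keep])
print(X[keep].shape[0], clf.support_.size)
```

Because the SVM sees fewer (and harder) samples, both training cost and the number of support vectors in the final discriminant function shrink, which is the effect the paper claims.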




Copyright information

© 2008 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Ding, L., Sun, F., Wang, H., Chen, N. (2008). A Sparse Sampling Method for Classification Based on Likelihood Factor. In: Sun, F., Zhang, J., Tan, Y., Cao, J., Yu, W. (eds) Advances in Neural Networks - ISNN 2008. ISNN 2008. Lecture Notes in Computer Science, vol 5264. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-87734-9_31

  • DOI: https://doi.org/10.1007/978-3-540-87734-9_31

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-87733-2

  • Online ISBN: 978-3-540-87734-9

  • eBook Packages: Computer Science (R0)
