Abstract
Classical SVMs suffer from heavy computation and a complex discriminant function as the training set grows. This paper proposes a classification method based on sparse sampling. A likelihood factor that indicates the importance of each sample is defined. Guided by this factor, unimportant samples are clipped and misjudged samples are revised; this process is called sparse sampling. Sparse sampling reduces both the number of training samples and the number of support vectors, so the improved classification method lowers computational complexity and simplifies the discriminant function.
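The abstract does not give the paper's formula for the likelihood factor, so the sketch below is only an illustration of the pipeline it describes: score each sample, clip the most redundant ones, flip labels the score marks as misjudged, then train an SVM on the reduced set. The Gaussian class-conditional score, the 70% keep quantile, and the toy data are all assumptions, not the authors' definitions.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy two-class data (illustrative only).
X = np.vstack([rng.normal(-1.0, 1.0, size=(100, 2)),
               rng.normal(+1.0, 1.0, size=(100, 2))])
y = np.array([0] * 100 + [1] * 100)

def likelihood_factor(X, y):
    """Illustrative stand-in for the paper's likelihood factor:
    log-ratio of own-class likelihood to other-class likelihood
    under per-class Gaussian fits."""
    stats = {c: (X[y == c].mean(0), X[y == c].std(0) + 1e-9)
             for c in np.unique(y)}

    def logpdf(x, mu, sd):
        return -0.5 * np.sum(((x - mu) / sd) ** 2 + np.log(2 * np.pi * sd ** 2))

    scores = np.empty(len(X))
    for i, (x, c) in enumerate(zip(X, y)):
        own = logpdf(x, *stats[c])
        other = max(logpdf(x, *stats[k]) for k in stats if k != c)
        # > 0: sample sits well inside its own class; < 0: likely misjudged.
        scores[i] = own - other
    return scores

scores = likelihood_factor(X, y)

# Sparse sampling: clip the most redundant samples (highest factor, i.e.
# far from the class boundary) and revise labels the factor flags as wrong.
keep = scores < np.quantile(scores, 0.7)      # drop the easiest 30%
y_rev = y.copy()
y_rev[scores < 0] = 1 - y_rev[scores < 0]     # revise misjudged labels

clf = SVC(kernel="rbf").fit(X[keep], y_rev[keep])
print(keep.sum(), "training samples,", clf.n_support_.sum(), "support vectors")
```

The intended effect matches the abstract's claim: the SVM is trained on fewer samples, and because the remaining points are cleaner, fewer of them become support vectors, giving a simpler discriminant function.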
Copyright information
© 2008 Springer-Verlag Berlin Heidelberg
Cite this paper
Ding, L., Sun, F., Wang, H., Chen, N. (2008). A Sparse Sampling Method for Classification Based on Likelihood Factor. In: Sun, F., Zhang, J., Tan, Y., Cao, J., Yu, W. (eds) Advances in Neural Networks - ISNN 2008. ISNN 2008. Lecture Notes in Computer Science, vol 5264. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-87734-9_31
Print ISBN: 978-3-540-87733-2
Online ISBN: 978-3-540-87734-9