An unbiased LSSVM model for classification and regression. (English) Zbl 1191.68604

Summary: To address the bias term and the computational complexity of the standard support vector machine, an unbiased least squares support vector machine (LSSVM) model is proposed in this paper. The model eliminates the bias term of the LSSVM by modifying the form of the structural risk, from which the unbiased least squares support vector classifier and the unbiased least squares support vector regression are derived. Based on this model, a new learning algorithm using Cholesky factorization is designed, exploiting the structure of the kernel matrix; in this way the computation of the Lagrange multipliers is greatly simplified. Experiments on several different datasets are carried out, including classification on common benchmark datasets, synthetic aperture radar image automatic target recognition, and chaotic time series prediction. The experimental results on recognition rate and fitting precision show that the unbiased LSSVM model achieves good approximation and fitting accuracy, better generalization capability and stability, and a substantial improvement in learning speed.
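The paper's exact formulation is not reproduced in this summary, but the key computational point can be sketched. In the standard LSSVM, the bias term couples the dual variables through an extra equality constraint, yielding an indefinite saddle-point system; removing the bias reduces the KKT conditions to a single symmetric positive definite system of the form (K + I/C)α = y, which a Cholesky factorization solves directly. A minimal sketch, assuming an RBF kernel and this standard unbiased system (the function names and parameters here are illustrative, not from the paper):

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    # Gaussian RBF kernel matrix: K[i, j] = exp(-gamma * ||x_i - z_j||^2)
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_unbiased_lssvm(X, y, C=10.0, gamma=1.0):
    # Without the bias term, the dual reduces to one SPD linear system
    #   (K + I/C) alpha = y,
    # so the Lagrange multipliers alpha follow from a single Cholesky solve.
    n = X.shape[0]
    A = rbf_kernel(X, X, gamma) + np.eye(n) / C
    L = np.linalg.cholesky(A)                      # A = L @ L.T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return alpha

def lssvm_predict(X_train, alpha, X_new, gamma=1.0):
    # f(x) = sum_i alpha_i K(x_i, x); take sign(f) for classification,
    # or use f directly for regression.
    return rbf_kernel(X_new, X_train, gamma) @ alpha
```

Because A is symmetric positive definite for any C > 0, the Cholesky route is both cheaper and numerically safer than solving the bordered saddle-point system of the biased LSSVM, which is the simplification the summary refers to.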


68T10 Pattern recognition, speech recognition
68T05 Learning and adaptive systems in artificial intelligence
Full Text: DOI

