Bayesian trigonometric support vector classifier. (English) Zbl 1085.68620

Summary: This letter describes Bayesian techniques for support vector classification. In particular, we propose a novel differentiable loss function, called the trigonometric loss function, which has the desirable property of natural normalization in the likelihood function, and we then follow standard Gaussian process techniques to set up a Bayesian framework. In this framework, Bayesian inference is used to implement model adaptation while keeping the merits of the support vector classifier, such as sparseness and convex programming; this distinguishes the approach from standard Gaussian processes for classification. Moreover, we put forward class probabilities for making predictions. Experimental results on benchmark data sets indicate the usefulness of this approach.
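The "natural normalization" claimed above can be made concrete in a short sketch. The cos² likelihood below is reconstructed from the stated normalization property (a loss l(z) = 2 ln sec(π/4·(1−z)) yields exp(−l) = cos²(π/4·(1−z))), not quoted verbatim from the paper; treat the exact form as an assumption.

```python
import math

def trig_likelihood(y, f):
    """Sketch of a trigonometric likelihood P(y | f), y in {-1, +1}.

    Assumed loss: l(z) = 2*ln(sec(pi/4*(1-z))) for z = y*f in (-1, 1),
    giving likelihood exp(-l) = cos^2(pi/4*(1-z)); the loss is 0 for
    z >= 1 and +infinity for z <= -1.
    """
    z = y * f
    if z >= 1.0:
        return 1.0   # zero loss: full-margin correct classification
    if z <= -1.0:
        return 0.0   # infinite loss: point on the wrong side of the margin
    return math.cos(math.pi / 4.0 * (1.0 - z)) ** 2

# Natural normalization: P(+1 | f) + P(-1 | f) = 1 for f in (-1, 1),
# since pi/4*(1+f) = pi/2 - pi/4*(1-f) turns cos^2 into sin^2.
f = 0.3
total = trig_likelihood(+1, f) + trig_likelihood(-1, f)
print(round(total, 10))  # 1.0
```

Because the two class likelihoods sum to one by the cos²+sin² identity, the likelihood needs no extra normalizing constant, which is the characteristic the summary highlights.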


68T05 Learning and adaptive systems in artificial intelligence