A comparative study on large scale kernelized support vector machines. (English) Zbl 1416.68144

Summary: Kernelized support vector machines (SVMs) are among the most widely used classification methods. However, in contrast to linear SVMs, the computation time required to train such a machine becomes a bottleneck on large data sets. To mitigate this shortcoming of kernel SVMs, many approximate training algorithms have been developed. While most of these methods claim to be much faster than the state-of-the-art solver LIBSVM, a thorough comparative study has been missing. We aim to fill this gap. We choose several well-known approximate SVM solvers and compare their performance on a number of large benchmark data sets. Our focus is to analyze the trade-off between prediction error and runtime for different learning and accuracy parameter settings. This includes simple subsampling of the data, the poor man's approach to handling large-scale problems. We employ model-based multi-objective optimization, which allows us to tune the parameters of the learning machine and the solver over the full range of accuracy/runtime trade-offs. We analyze (differences between) solvers by studying and comparing the Pareto fronts formed by the two objectives, classification error and training time. Unsurprisingly, most solvers find more accurate solutions when given more runtime, i.e., they achieve higher prediction accuracy. It turns out that LIBSVM with subsampling of the data is a strong baseline. Some solvers systematically outperform others, which allows us to give concrete recommendations on when to use which solver.
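
The following is a minimal sketch of the subsampling baseline described above, assuming Python with scikit-learn (whose SVC class wraps LIBSVM) and a synthetic data set standing in for the study's large benchmarks; the subsample grid, fixed hyperparameters, and Pareto filter are illustrative choices, not the authors' actual setup.

```python
# Sketch of the subsampling baseline: train a kernel SVM on nested subsamples,
# record (training time, test error) pairs, and keep the Pareto-optimal
# trade-offs. scikit-learn's SVC uses LIBSVM internally.
import time

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for one of the large benchmark data sets.
X, y = make_classification(n_samples=20000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

results = []
for frac in (0.01, 0.05, 0.1, 0.25, 0.5, 1.0):  # illustrative subsample grid
    n = int(frac * len(X_tr))
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")  # C and gamma would be tuned
    start = time.perf_counter()
    clf.fit(X_tr[:n], y_tr[:n])
    runtime = time.perf_counter() - start
    error = 1.0 - clf.score(X_te, y_te)
    results.append((runtime, error))

# A point is Pareto-optimal if no other point is at least as good in both
# objectives (runtime and error) while differing in at least one.
pareto = [p for p in results
          if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in results)]
print(sorted(pareto))
```

In the study itself, the parameters of learning machine and solver are tuned jointly by model-based multi-objective optimization rather than fixed as above, and the resulting Pareto fronts over classification error and training time are compared across solvers.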

MSC:

68T05 Learning and adaptive systems in artificial intelligence

References:

[1] Bischl B, Lang M, Mersmann O, Rahnenführer J, Weihs C (2015) BatchJobs and batchexperiments: abstraction mechanisms for using R in batch environments. J Stat Softw 64(11):1-25. http://www.jstatsoft.org/v64/i11/
[2] Bordes, A.; Ertekin, S.; Weston, J.; Bottou, L., Fast kernel classifiers with online and active learning, J Mach Learn Res, 6, 1579-1619, (2005) · Zbl 1222.68152
[3] Bottou L, Lin C-J (2007) Support vector machine solvers. In: Bottou L, Chapelle O, DeCoste D, Weston J (eds) Large scale kernel machines. MIT Press, Cambridge, MA, pp 301-320. http://leon.bottou.org/papers/bottou-lin-2006
[4] Bousquet O, Bottou L (2008) The tradeoffs of large scale learning. In: Platt JC, Koller D, Singer Y, Roweis ST (eds) Advances in neural information processing systems, vol 20. Curran Associates Inc, Red Hook, NY, pp 161-168. http://papers.nips.cc/paper/3323-the-tradeoffs-of-large-scale-learning.pdf
[5] Chang, C-C; Lin, C-J, LIBSVM: a library for support vector machines, ACM Trans Intell Syst Technol, 2, 1-27, (2011)
[6] Cortes, C.; Vapnik, V., Support-vector networks, Mach Learn, 20, 273-297, (1995) · Zbl 0831.68098
[7] Djuric, N.; Lan, L.; Vucetic, S.; Wang, Z., BudgetedSVM: a toolbox for scalable SVM approximations, J Mach Learn Res, 14, 3813-3817, (2013) · Zbl 1317.68153
[8] Ehrgott M (2013) Multicriteria optimization, vol 491. Springer Science & Business Media, Berlin
[9] Fan, R-E; Chang, K-W; Hsieh, C-J; Wang, X-R; Lin, C-J, LIBLINEAR: a library for large linear classification, J Mach Learn Res, 9, 1871-1874, (2008) · Zbl 1225.68175
[10] Fine, S.; Scheinberg, K., Efficient SVM training using low-rank kernel representations, J Mach Learn Res, 2, 243-264, (2002) · Zbl 1037.68112
[11] Glasmachers, T.; Igel, C., Maximum-gain working set selection for support vector machines, J Mach Learn Res, 7, 1437-1466, (2006) · Zbl 1222.90040
[12] Graf HP, Cosatto E, Bottou L, Durdanovic I, Vapnik V (2004) Parallel support vector machines: the cascade SVM. In: NIPS, pp 521-528
[13] Horn, D.; Wagner, T.; Biermann, D.; Weihs, C.; Bischl, B., Model-based multi-objective optimization: taxonomy, multi-point proposal, toolbox and benchmark, 64-78, (2015), Cham
[14] Igel, C.; Heidrich-Meisner, V.; Glasmachers, T., Shark, J Mach Learn Res, 9, 993-996, (2008) · Zbl 1225.68188
[15] Joachims T (1998) Making large-scale SVM learning practical. In: Schölkopf B, Burges C, Smola A (eds) Advances in kernel methods—support vector learning, chapter 11. MIT Press, Cambridge, pp 169-184
[16] Joachims, T.; Yu, C-NJ, Sparse kernel SVMs via cutting-plane training, Mach Learn, 76, 179-193, (2009) · Zbl 1235.68161
[17] Jones, DR; Schonlau, M.; Welch, WJ, Efficient global optimization of expensive black-box functions, J Glob Optim, 13, 455-492, (1998) · Zbl 0917.90270
[18] Knowles, J., ParEGO: a hybrid algorithm with online landscape approximation for expensive multiobjective optimization problems, IEEE Trans Evol Comput, 10, 50-66, (2006)
[19] Koch, P.; Bischl, B.; Flasch, O.; Bartz-Beielstein, T.; Weihs, C.; Konen, W., Tuning and evolution of support vector kernels, Evol Intell, 5, 153-170, (2012)
[20] Lin C-J (2001) Linear convergence of a decomposition method for support vector machines. Technical report
[21] Nandan M, Khargonekar PP, Talathi SS (2013) Fast SVM training using approximate extreme points. arXiv:1304.1391 · Zbl 1317.68177
[22] Platt J (1998) Fast training of support vector machines using sequential minimal optimization. In: Schölkopf B, Burges C, Smola A (eds) Advances in kernel methods—support vector learning, chapter 12. MIT Press, Cambridge, pp 185-208
[23] Shalev-Shwartz, S.; Singer, Y.; Srebro, N.; Cotter, A., Pegasos: primal estimated sub-gradient solver for SVM, Math Program, 127, 3-30, (2011) · Zbl 1211.90239
[24] Steinwart, I., Sparseness of support vector machines, J Mach Learn Res, 4, 1071-1105, (2003) · Zbl 1094.68082
[25] Tsang IW, Kwok JT, Cheung P-M, Cristianini N (2005) Core vector machines: fast SVM training on very large data sets. J Mach Learn Res 6:363-392 · Zbl 1222.68320
[26] Tsang IW, Kocsor A, Kwok JT (2007) Simpler core vector machines with enclosing balls. In: Proceedings of the 24th international conference on machine learning. ACM, New York, NY, USA, pp 911-918
[27] van Rijn JN, Bischl B, Torgo L, Gao B, Umaashankar V, Fischer S, Winter P, Wiswedel B, Berthold MR, Vanschoren J (2013) OpenML: a collaborative science platform. In: Machine learning and knowledge discovery in databases. Springer, Berlin, Heidelberg, pp 645-649
[28] Wang, Z.; Crammer, K.; Vucetic, S., Breaking the curse of kernelization: budgeted stochastic gradient descent for large-scale SVM training, J Mach Learn Res, 13, 3103-3131, (2012) · Zbl 1433.68383
[29] Williams C, Seeger M (2001) Using the Nyström method to speed up kernel machines. In: Advances in neural information processing systems, vol 13. MIT Press, Cambridge, pp 682-688
[30] Zhang K, Lan L, Wang Z, Moerchen F (2012) Scaling up kernel SVM on limited resources: a low-rank linearization approach. In: International conference on artificial intelligence and statistics, pp 1425-1434
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.