# zbMATH — the first resource for mathematics

Decomposition algorithm model for singly linearly-constrained problems subject to lower and upper bounds. (English) Zbl 1168.90556
Summary: Many real applications can be formulated as nonlinear minimization problems with a single linear equality constraint and box constraints. We are interested in solving problems where the number of variables is so large that basic operations, such as evaluating the objective function or updating its gradient, are very time consuming. Thus, for the considered class of problems (including dense quadratic programs), traditional optimization methods cannot be applied directly. In this paper, we define a decomposition algorithm model which employs, at each iteration, a descent search direction selected from a suitable set of sparse feasible directions. The algorithm is characterized by an acceptance rule for the updated point which, on the one hand, allows the variables to be modified to be chosen with a certain degree of freedom and, on the other hand, does not require the exact solution of any subproblem. The global convergence of the algorithm model is proved under the assumptions that the objective function is continuously differentiable and that every point of the level set has at least one component strictly between the lower and upper bounds. Numerical results on large-scale quadratic problems arising in the training of support vector machines show the effectiveness of an implemented decomposition scheme derived from the general algorithm model.
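The sparse feasible directions mentioned in the summary can be illustrated with a minimal sketch (this is a generic two-variable decomposition step in the spirit of SMO-type SVM solvers, not the authors' actual algorithm): for a quadratic objective under a single equality constraint \(\sum_i x_i = b\) and box bounds, the direction \(d = e_i - e_j\) has zero component sum, so any step along it preserves the equality constraint; one then performs an exact line search along \(d\) clipped to the box. The function name and the convexity tolerance below are illustrative assumptions.

```python
import numpy as np

def two_coordinate_step(Q, c, x, l, u, i, j):
    """One exact line search along the sparse feasible direction d = e_i - e_j
    for  min 0.5 x^T Q x + c^T x   s.t.  sum(x) = b,  l <= x <= u.
    Because d has zero component sum, the equality constraint is preserved."""
    g = Q @ x + c                              # gradient at the current point
    gd = g[i] - g[j]                           # directional derivative along d
    curv = Q[i, i] + Q[j, j] - 2.0 * Q[i, j]   # curvature of f along d
    if curv > 1e-12:
        t = -gd / curv                         # unconstrained minimizer along d
    elif gd != 0.0:
        t = -np.inf if gd > 0 else np.inf      # flat/nonconvex case: go to a bound
    else:
        t = 0.0                                # no descent along this direction
    # Clip the step so that l <= x + t*d <= u for the two active coordinates.
    t = min(t, u[i] - x[i], x[j] - l[j])
    t = max(t, l[i] - x[i], x[j] - u[j])
    x_new = x.copy()
    x_new[i] += t
    x_new[j] -= t
    return x_new
```

Only rows/columns i and j of Q enter the step, which is why such schemes remain viable when Q is too large to store or multiply in full, as for the dense kernel matrices arising in SVM training.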

##### MSC:
 90C06 Large-scale problems in mathematical programming
##### Software:
GPDT; LIBSVM; SSVM
##### References:
[1] Barr, R.O., Gilbert, E.G.: Some efficient algorithms for a class of abstract optimization problems arising in optimal control. IEEE Trans. Autom. Control 14, 640–652 (1969) · doi:10.1109/TAC.1969.1099299
[2] Correa, J.R., Schulz, A.S., Stier Moses, N.E.: Selfish routing in capacitated networks. Math. Oper. Res. 961–976 (2004) · Zbl 1082.90009
[3] Kao, C., Lee, L.-F., Pitt, M.M.: Simulated maximum likelihood estimation of the linear expenditure system with binding non-negativity constraints. Ann. Econ. Finance 2, 203–223 (2001)
[4] Meyer, R.R.: Multipoint methods for separable nonlinear networks. Math. Program. Study 22, 185–205 (1985) · Zbl 0553.90037
[5] Bomze, I.M.: Evolution towards the maximum clique. J. Glob. Optim. 10, 143–164 (1997) · Zbl 0880.90110 · doi:10.1023/A:1008230200610
[6] Vapnik, V.N.: The Nature of Statistical Learning Theory. Springer, New York (1995) · Zbl 0833.62008
[7] Ferris, M.C., Munson, T.S.: Interior-point methods for massive support vector machines. SIAM J. Optim. 13, 783–804 (2003) · Zbl 1039.90092 · doi:10.1137/S1052623400374379
[8] Ferris, M.C., Munson, T.S.: Semismooth support vector machines. Math. Program. 101, 185–204 (2004) · Zbl 1076.90041
[9] Lee, Y.-J., Mangasarian, O.L.: SSVM: a smooth support vector machine for classification. Comput. Optim. Appl. 20, 5–22 (2001) · Zbl 1017.90105 · doi:10.1023/A:1011215321374
[10] Mangasarian, O.L., Thompson, M.E.: Massive data classification via unconstrained support vector machines. J. Optim. Theory Appl. 131(3), 315–325 (2006) · Zbl 1139.90017 · doi:10.1007/s10957-006-9157-x
[11] Mangasarian, O.L., Musicant, D.R.: Active set support vector machine classification. In: Leen, T.K., Dietterich, T.G. (eds.) Neural Information Processing Systems 2000 (NIPS 2000), pp. 577–583. MIT Press, Cambridge (2001)
[12] Scheinberg, K.: An efficient implementation of an active set method for SVMs. J. Mach. Learn. Res. 7, 2237–2257 (2006) · Zbl 1222.90043
[13] Dai, Y.-H., Fletcher, R.: New algorithms for singly linearly constrained quadratic programs subject to lower and upper bounds. Math. Program. 106, 403–421 (2006) · Zbl 1134.90030 · doi:10.1007/s10107-005-0595-2
[14] Lin, C.-J.: On the convergence of the decomposition method for support vector machines. IEEE Trans. Neural Netw. 12, 1288–1298 (2001) · Zbl 1012.94501 · doi:10.1109/72.963765
[15] Lucidi, S., Palagi, L., Risi, A., Sciandrone, M.: A convergent decomposition algorithm for support vector machines. Comput. Optim. Appl. 38, 217–234 (2007) · Zbl 1172.90443 · doi:10.1007/s10589-007-9044-x
[16] Mangasarian, O.L., Musicant, D.R.: Successive overrelaxation for support vector machines. IEEE Trans. Neural Netw. 10, 1032–1037 (1999) · doi:10.1109/72.788643
[17] Serafini, T., Zanni, L.: On the working set selection in gradient projection-based decomposition techniques for support vector machines. Optim. Methods Softw. 20, 583–596 (2005) · Zbl 1116.90115 · doi:10.1080/10556780500140714
[18] Chang, C.-C., Hsu, C.-W., Lin, C.-J.: The analysis of decomposition methods for support vector machines. IEEE Trans. Neural Netw. 11, 1003–1008 (2000) · doi:10.1109/72.857780
[19] Palagi, L., Sciandrone, M.: On the convergence of a modified version of the SVM light algorithm. Optim. Methods Softw. 20, 311–328 (2005) · Zbl 1072.90042 · doi:10.1080/10556780512331318209
[20] Bertsekas, D.P.: Nonlinear Programming, 2nd edn. Athena Scientific, New York (1999)
[21] Chang, C.-C., Lin, C.-J.: LIBSVM: a library for support vector machines. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm (2001)
[22] Joachims, T.: Making large scale SVM learning practical. In: Schölkopf, B., Burges, C.J.C., Smola, A.J. (eds.) Advances in Kernel Methods–Support Vector Learning. MIT Press, Cambridge (1998)
[23] Keerthi, S.S., Shevade, S.K., Bhattacharyya, C., Murthy, K.R.K.: A fast iterative nearest point algorithm for support vector machine classifier design. IEEE Trans. Neural Netw. 11, 124–136 (2000) · doi:10.1109/72.822516
[24] Cristianini, N., Shawe-Taylor, J.: An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods. Cambridge University Press, Cambridge (2000) · Zbl 0994.68074
[25] LIBSVM datasets: classification, regression and multi-label. http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.