A block symmetric Gauss-Seidel decomposition theorem for convex composite quadratic programming and its applications. (English) Zbl 1412.90086

Summary: For a symmetric positive semidefinite linear system of equations \(\mathcal{Q}{{x}}= {{{b}}}\), where \({{x}}= (x_1,\ldots ,x_s)\) is partitioned into \(s\) blocks with \(s \ge 2\), we show that each cycle of the classical block symmetric Gauss-Seidel (sGS) method exactly solves the associated quadratic programming (QP) problem augmented with an extra proximal term of the form \(\frac{1}{2}\Vert {{x}}-{{x}}^k\Vert _\mathcal{T}^2\), where \(\mathcal{T}\) is a symmetric positive semidefinite matrix related to the sGS decomposition of \(\mathcal{Q}\) and \({{x}}^k\) is the previous iterate. By leveraging this connection to optimization, we extend the result (which we call the block sGS decomposition theorem) to convex composite QP (CCQP) with an additional, possibly nonsmooth, term in \(x_1\), i.e., \(\min \{ p(x_1) + \frac{1}{2}\langle {{x}},\,\mathcal{Q}{{x}}\rangle -\langle {{{b}}},\,{{x}}\rangle \}\), where \(p(\cdot )\) is a proper closed convex function. Based on the block sGS decomposition theorem, we extend the classical block sGS method to solve CCQP. Moreover, our extended block sGS method allows for inexact computation in each step of the block sGS cycle. The inexact block sGS method can also be accelerated to achieve an iteration complexity of \(O(1/k^2)\) after performing \(k\) cycles. As a fundamental building block, the block sGS decomposition theorem has played a key role in various recently developed algorithms, such as the inexact semiproximal ALM/ADMM for linearly constrained multi-block convex composite conic programming (CCCP) and the accelerated block coordinate descent method for multi-block CCCP.
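The core identity behind the theorem can be checked numerically. The following NumPy sketch (the block sizes, matrix, and variable names are illustrative choices, not taken from the paper) verifies that one forward-plus-backward block Gauss-Seidel sweep on \(\mathcal{Q}{{x}}={{{b}}}\) from the point \({{x}}^k\) coincides with the minimizer of the QP augmented by \(\frac{1}{2}\Vert {{x}}-{{x}}^k\Vert _\mathcal{T}^2\), taking \(\mathcal{T}=\mathcal{L}\mathcal{D}^{-1}\mathcal{L}^{\mathsf{T}}\) for the splitting \(\mathcal{Q}=\mathcal{D}+\mathcal{L}+\mathcal{L}^{\mathsf{T}}\) into block diagonal and strictly block lower triangular parts:

```python
# Numerical check of the block sGS decomposition theorem (a sketch with
# hypothetical block sizes; not code from the paper).
import numpy as np

rng = np.random.default_rng(0)
sizes = [2, 3, 2]                      # s = 3 blocks (illustrative choice)
n = sum(sizes)
A = rng.standard_normal((n, n))
Q = A @ A.T + n * np.eye(n)            # symmetric positive definite
b = rng.standard_normal(n)

# Block partition: Q = D + L + L^T with D block diagonal and L strictly
# block lower triangular.
offs = np.cumsum([0] + sizes)
blocks = [slice(offs[i], offs[i + 1]) for i in range(len(sizes))]
D = np.zeros_like(Q)
L = np.zeros_like(Q)
for i, bi in enumerate(blocks):
    D[bi, bi] = Q[bi, bi]
    for j, bj in enumerate(blocks):
        if i > j:
            L[bi, bj] = Q[bi, bj]

def sgs_cycle(x):
    """One forward + backward block Gauss-Seidel sweep for Q x = b."""
    y = x.copy()
    for bi in blocks:                  # forward sweep: (D + L) y = b - L^T x
        r = b[bi] - Q[bi, :] @ y + Q[bi, bi] @ y[bi]
        y[bi] = np.linalg.solve(Q[bi, bi], r)
    for bi in reversed(blocks):        # backward sweep: (D + L^T) x+ = b - L y
        r = b[bi] - Q[bi, :] @ y + Q[bi, bi] @ y[bi]
        y[bi] = np.linalg.solve(Q[bi, bi], r)
    return y

x0 = rng.standard_normal(n)
x_sgs = sgs_cycle(x0)

# The theorem: x_sgs minimizes (1/2)<x,Qx> - <b,x> + (1/2)||x - x0||_T^2
# with T = L D^{-1} L^T, i.e. it solves (Q + T) x = b + T x0.
T = L @ np.linalg.solve(D, L.T)
x_prox = np.linalg.solve(Q + T, b + T @ x0)
assert np.allclose(x_sgs, x_prox)
```

Since \(\mathcal{Q}\) here is positive definite, repeating `sgs_cycle` is a convergent fixed-point iteration for \(\mathcal{Q}{{x}}={{{b}}}\); it is the proximal-QP reformulation of each cycle that opens the door to the extension with the nonsmooth term \(p(x_1)\).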


90C06 Large-scale problems in mathematical programming
90C20 Quadratic programming
90C25 Convex programming
65F10 Iterative numerical methods for linear systems
Full Text: DOI arXiv


[1] Axelsson, O.: Iterative Solution Methods. Cambridge University Press, Cambridge (1994) · Zbl 0795.65014
[2] Bai, MR; Zhang, XJ; Ni, GY; Cui, CF, An adaptive correction approach for tensor completion, SIAM J. Imaging Sci., 9, 1298-1323, (2016) · Zbl 1456.90104
[3] Bai, S.; Qi, H-D, Tackling the flip ambiguity in wireless sensor network localization and beyond, Digital Signal Process., 55, 85-97, (2016)
[4] Bank, RE; Dupont, TF; Yserentant, H., The hierarchical basis multigrid method, Numerische Mathematik, 52, 427-458, (1988) · Zbl 0645.65074
[5] Beck, A.; Tetruashvili, L., On the convergence of block coordinate descent type methods, SIAM J. Optim., 23, 2037-2060, (2013) · Zbl 1297.90113
[6] Bertsekas, D.P.: Nonlinear Programming, 2nd edn. Athena Scientific, Belmont (1999) · Zbl 0935.90037
[7] Bi, S., Pan, S., Sun, D. F.: Multi-stage convex relaxation approach to noisy structured low-rank matrix recovery, arXiv:1703.03898 (2017)
[8] Ding, C.; Qi, H-D, Convex optimization learning of faithful Euclidean distance representations in nonlinear dimensionality reduction, Math. Program., 164, 341-381, (2017) · Zbl 1391.90472
[9] Ding, C.; Qi, H-D, Convex Euclidean distance embedding for collaborative position localization with NLOS mitigation, Comput. Optim. Appl., 66, 187-218, (2017) · Zbl 1369.90120
[10] Chen, L.; Sun, DF; Toh, K-C, An efficient inexact symmetric Gauss-Seidel based majorized ADMM for high-dimensional convex composite conic programming, Math. Program., 161, 237-270, (2017) · Zbl 1356.90105
[11] Fercoq, O.; Richtárik, P., Accelerated, parallel, and proximal coordinate descent, SIAM J. Optim., 25, 1997-2023, (2015) · Zbl 1327.65108
[12] Fercoq, O.; Richtárik, P., Optimization in high dimensions via accelerated, parallel, and proximal coordinate descent, SIAM Rev., 58, 739-771, (2016) · Zbl 1353.65053
[13] Ferreira, J. B., Khoo, Y., Singer, A.: Semidefinite programming approach for the quadratic assignment problem with a sparse graph, arXiv:1703.09339 (2017) · Zbl 1415.90071
[14] Freund, R. W.: Preconditioning of symmetric, but highly indefinite linear systems. In: Proceedings of the 15th IMACS World Congress on Scientific Computation, Modelling and Applied Mathematics, Berlin, Germany, pp. 551-556 (1997)
[15] Greenbaum, A.: Iterative Methods for Solving Linear Systems. SIAM, Philadelphia (1997) · Zbl 0883.65022
[16] Grippo, L.; Sciandrone, M., On the convergence of the block nonlinear Gauss-Seidel method under convex constraints, Oper. Res. Lett., 26, 127-136, (2000) · Zbl 0955.90128
[17] Hackbusch, W.: Iterative Solutions of Large Sparse Systems of Equations. Springer, New York (1994) · Zbl 0789.65017
[18] Han, D., Sun, D. F., Zhang, L.: Linear rate convergence of the alternating direction method of multipliers for convex composite programming, Math. Oper. Res. (2017). https://doi.org/10.1287/moor.2017.0875
[19] Jiang, KF; Sun, DF; Toh, K-C, An inexact accelerated proximal gradient method for large scale linearly constrained convex SDP, SIAM J. Optim., 22, 1042-1064, (2012) · Zbl 1401.90120
[20] Bredies, K.; Sun, HP, Preconditioned Douglas-Rachford splitting methods for convex-concave saddle-point problems, SIAM J. Numer. Anal., 53, 421-444, (2015) · Zbl 1314.65084
[21] Bredies, K.; Sun, HP, Preconditioned Douglas-Rachford algorithms for TV- and TGV-regularized variational imaging problems, J. Math. Imaging Vis., 52, 317-344, (2015) · Zbl 1343.94003
[22] Lam, X.Y., Marron, J.S., Sun, D.F., Toh, K.-C.: Fast algorithms for large scale extended distance weighted discrimination, arXiv:1604.05473. Journal of Computational and Graphical Statistics (2016, to appear)
[23] Li, X.D., Sun, D.F., Toh, K.-C.: QSDPNAL: a two-phase augmented Lagrangian method for convex quadratic semidefinite programming, arXiv:1512.08872 (2015)
[24] Li, XD; Sun, DF; Toh, K-C, A Schur complement based semi-proximal ADMM for convex quadratic conic programming and extensions, Math. Program., 155, 333-373, (2016) · Zbl 1342.90134
[25] Li, X.D.: A two-phase augmented Lagrangian method for convex composite quadratic programming, PhD thesis, Department of Mathematics, National University of Singapore (2015)
[26] Luo, Z-Q; Tseng, P., On the linear convergence of descent methods for convex essentially smooth minimization, SIAM J. Control Optim., 30, 408-425, (1992) · Zbl 0756.90084
[27] Luo, Z-Q; Tseng, P., Error bounds and convergence analysis of feasible descent methods: a general approach, Ann. Oper. Res., 46, 157-178, (1993) · Zbl 0793.90076
[28] Nesterov, Y., Efficiency of coordinate descent methods on huge-scale optimization problems, SIAM, J. Optim., 22, 341-362, (2012) · Zbl 1257.90073
[29] Nesterov, Y.; Stich, SU, Efficiency of the accelerated coordinate descent method on structured optimization problems, SIAM J. Optim., 27, 110-123, (2017) · Zbl 1359.90073
[30] Ortega, J.M., Rheinboldt, W.C.: Iterative Solution of Nonlinear Equations in Several Variables. SIAM, Philadelphia (2000) · Zbl 0949.65053
[31] Richtárik, P.; Takáč, M., Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function, Math. Program., 144, 1-38, (2014) · Zbl 1301.65051
[32] Rockafellar, R.T.: Convex Analysis. Princeton University Press, Princeton (1970) · Zbl 0193.18401
[33] Robinson, SM, Some continuity properties of polyhedral multifunctions, Math. Program. Study, 14, 206-214, (1981) · Zbl 0449.90090
[34] Saad, Y.: Iterative Methods for Sparse Linear Systems. SIAM, Philadelphia (2003)
[35] Schmidt, M., Le Roux, N., Bach, F.: Convergence rates of inexact proximal-gradient methods for convex optimization. In: Advances in Neural Information Processing Systems (NIPS), pp. 1458-1466 (2011)
[36] Sun, DF; Toh, K-C; Yang, LQ, An efficient inexact ABCD method for least squares semidefinite programming, SIAM J. Optim., 26, 1072-1100, (2016) · Zbl 1346.90658
[37] Sun, DF; Toh, K-C; Yang, LQ, A convergent 3-block semi-proximal alternating direction method of multipliers for conic programming with 4-type constraints, SIAM J. Optim., 25, 882-915, (2015) · Zbl 1328.90083
[38] Sun, J.: On monotropic piecewise quadratic programming, PhD Thesis, Department of Mathematics, University of Washington, Seattle (1986)
[39] Tappenden, R.; Richtárik, P.; Gondzio, J., Inexact coordinate descent: complexity and preconditioning, J. Optim. Theory Appl., 170, 144-176, (2016) · Zbl 1350.65062
[40] Tseng, P., Convergence of a block coordinate descent method for nondifferentiable minimization, J. Optim. Theory Appl., 109, 475-494, (2001) · Zbl 1006.65062
[41] Tseng, P.; Yun, S., A coordinate gradient descent method for nonsmooth separable minimization, Math. Program., 125, 387-423, (2010) · Zbl 1166.90016
[42] Varga, R.S.: Matrix Iterative Analysis. Springer, Berlin (2009) · Zbl 1216.65042
[43] Wen, B.; Chen, X.; Pong, TK, Linear convergence of proximal gradient algorithm with extrapolation for a class of nonconvex nonsmooth minimization problems, SIAM J. Optim., 27, 124-145, (2017) · Zbl 1359.90138
[44] Xiao, L.; Lu, Z., On the complexity analysis of randomized block-coordinate descent methods, Math. Program., 152, 615-642, (2015) · Zbl 1321.65100
[45] Young, DM, On the accelerated SSOR method for solving large linear systems, Adv. Math., 23, 215-217, (1977) · Zbl 0356.65027
[46] Zhang, X., Xu, C., Zhang, Y., Zhu, T., Cheng, L.: Multivariate regression with grossly corrupted observations: a robust approach and its applications, arXiv:1701.02892 (2017)
[47] Zhou, ZR; So, AM-C, A unified approach to error bounds for structured convex optimization problems, Math. Program., 165, 689-728, (2017) · Zbl 1380.65109
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.