
An adaptive accelerated proximal gradient method and its homotopy continuation for sparse optimization. (English) Zbl 1341.90102

Summary: We consider optimization problems with an objective function that is the sum of two convex terms: one is smooth and given by a black-box oracle, and the other is general but with a simple, known structure. We first present an accelerated proximal gradient (APG) method for problems where the smooth part of the objective function is also strongly convex. This method incorporates an efficient line-search procedure and achieves the optimal iteration complexity for such composite optimization problems. When the strong convexity parameter is unknown, we also develop an adaptive scheme that can estimate it automatically on the fly, at the cost of a slightly worse iteration complexity. Then we focus on the special case of solving the \(\ell _1\)-regularized least-squares problem in the high-dimensional setting. In such a context, the smooth part of the objective (least-squares) is not strongly convex over the entire domain. Nevertheless, we can exploit its restricted strong convexity over sparse vectors using the adaptive APG method combined with a homotopy continuation scheme. We show that such a combination leads to a global geometric rate of convergence, and the overall iteration complexity has a weaker dependency on the restricted condition number than previous work.
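For concreteness, the ingredients described in the summary (a proximal step for the \(\ell_1\) term via soft-thresholding, Nesterov-style acceleration with a backtracking line search, and homotopy continuation on the regularization parameter) can be sketched as follows. This is a minimal illustrative sketch and not the paper's algorithm: the function names, the FISTA-style momentum, the fixed decrease factor `eta`, and the stopping rules are all assumptions here, and the paper's adaptive on-the-fly estimation of the (restricted) strong convexity parameter is omitted.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||.||_1 (componentwise soft-thresholding).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def apg_l1(A, b, lam, x0, L0=1.0, max_iter=500, tol=1e-8):
    """Accelerated proximal gradient (FISTA-style) with backtracking
    line search for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    x, y = x0.copy(), x0.copy()
    t, L = 1.0, L0
    for _ in range(max_iter):
        grad = A.T @ (A @ y - b)
        # Backtracking: double L until the quadratic upper bound
        # f(x_new) <= f(y) + <grad, x_new - y> + (L/2)||x_new - y||^2 holds.
        while True:
            x_new = soft_threshold(y - grad / L, lam / L)
            diff = x_new - y
            if (0.5 * np.linalg.norm(A @ x_new - b) ** 2
                    <= 0.5 * np.linalg.norm(A @ y - b) ** 2
                    + grad @ diff + 0.5 * L * (diff @ diff) + 1e-12):
                break
            L *= 2.0
        # Standard FISTA momentum update.
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        if np.linalg.norm(x_new - x) <= tol * max(1.0, np.linalg.norm(x)):
            return x_new
        x, t = x_new, t_new
    return x

def homotopy_apg(A, b, lam_target, eta=0.5):
    """Homotopy continuation: solve a geometrically decreasing sequence of
    regularization parameters, warm-starting each stage from the last."""
    lam = np.max(np.abs(A.T @ b))  # for lam >= this value, 0 is optimal
    x = np.zeros(A.shape[1])
    while lam > lam_target:
        lam = max(eta * lam, lam_target)
        x = apg_l1(A, b, lam, x)
    return x
```

The warm start is what makes continuation pay off: each stage begins near the solution path, so the iterates stay sparse and the restricted strong convexity of the least-squares term can take effect, which is the mechanism behind the global geometric rate claimed in the paper.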

MSC:

90C26 Nonconvex programming, global optimization

Software:

FPC_AS; ParNes; PDCO

References:

[1] Agarwal, A., Negahban, S.N., Wainwright, M.J.: Fast global convergence of gradient methods for high-dimensional statistical recovery. Ann. Stat. 40(5), 2452-2482 (2012) · Zbl 1373.62244
[2] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183-202 (2009) · Zbl 1175.94009
[3] Bredies, K., Lorenz, D.A.: Linear convergence of iterative soft-thresholding. J. Fourier Anal. Appl. 14, 813-837 (2008) · Zbl 1175.65061
[4] Bruckstein, A.M., Donoho, D.L., Elad, M.: From sparse solutions of systems of equations to sparse modeling of signals and images. SIAM Rev. 51(1), 34-81 (2009) · Zbl 1178.68619
[5] Candès, E.J., Tao, T.: Decoding by linear programming. IEEE Trans. Inform. Theory 51(12), 4203-4215 (2005) · Zbl 1264.94121
[6] Candès, E.J., Tao, T.: Near-optimal signal recovery from random projections: universal encoding strategies? IEEE Trans. Inform. Theory 52(12), 5406-5425 (2006) · Zbl 1309.94033
[7] Candès, E.J., Romberg, J., Tao, T.: Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inform. Theory 52(2), 489-509 (2006) · Zbl 1231.94017
[8] Chen, S.S., Donoho, D.L., Saunders, M.A.: Atomic decomposition by basis pursuit. SIAM J. Sci. Comput. 20(1), 33-61 (1998) · Zbl 0919.94002
[9] Donoho, D.L.: Compressed sensing. IEEE Trans. Inform. Theory 52(4), 1289-1306 (2006) · Zbl 1288.94016
[10] Gonzaga, C.C., Karas, E.W.: Fine tuning Nesterov’s steepest descent algorithm for differentiable convex programming. Math. Program. Ser. A 138, 141-166 (2013) · Zbl 1297.90118
[11] Gu, M., Lim, L.-H., Wu, C.J.: ParNes: a rapidly convergent algorithm for accurate recovery of sparse and approximately sparse signals. Numer. Algorithms 64, 321-347 (2013) · Zbl 1284.65055
[12] Hale, E.T., Yin, W., Zhang, Y.: Fixed-point continuation for \(\ell_1\)-minimization: methodology and convergence. SIAM J. Optim. 19(3), 1107-1130 (2008) · Zbl 1180.65076
[13] Li, S., Mo, Q.: New bounds on the restricted isometry constant \(\delta_{2k}\). Appl. Comput. Harmon. Anal. 31(3), 460-468 (2011) · Zbl 1231.94027
[14] Luo, Z.-Q., Tseng, P.: On the linear convergence of descent methods for convex essentially smooth minimization. SIAM J. Control Optim. 30(2), 408-425 (1992) · Zbl 0756.90084
[15] Monteiro, R.D.C., Ortiz, C., Svaiter, B.F.: An adaptive accelerated first-order method for convex optimization. Technical report, School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA (2012) · Zbl 1344.90049
[16] Nemirovski, A., Yudin, D.: Problem Complexity and Method Efficiency in Optimization. Wiley, New York (1983)
[17] Nesterov, Y.: Introductory Lectures on Convex Optimization: A Basic Course. Kluwer, Boston (2004) · Zbl 1086.90045
[18] Nesterov, Y.: Smooth minimization of nonsmooth functions. Math. Program. 103(1), 127-152 (2005) · Zbl 1079.90102
[19] Nesterov, Y.: How to advance in structural convex optimization. OPTIMA: Math. Program. Soc. Newsl. 78, 2-5 (2008)
[20] Nesterov, Y.: Gradient methods for minimizing composite objective function. Math. Program. Ser. B 140(1), 125-161 (2013) · Zbl 1287.90067
[21] Nesterov, Y., Nemirovski, A.: Interior-Point Polynomial Algorithms in Convex Programming. SIAM Studies in Applied Mathematics. SIAM, Philadelphia (1994) · Zbl 0824.90112
[22] O’Donoghue, B., Candès, E.J.: Adaptive restart for accelerated gradient schemes. Found. Comput. Math. (2013). doi:10.1007/s10208-013-9150-3 · Zbl 1320.90061
[23] Rockafellar, R.T.: Convex Analysis. Princeton University Press, Princeton (1970) · Zbl 0193.18401
[24] Tibshirani, R.: Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B 58, 267-288 (1996) · Zbl 0850.62538
[25] Tseng, P.: On accelerated proximal gradient methods for convex-concave optimization. Manuscript (2008) · Zbl 1287.90067
[26] Wright, S.J., Nowak, R.D., Figueiredo, M.A.T.: Sparse reconstruction by separable approximation. IEEE Trans. Signal Process. 57(7), 2479-2493 (2009) · Zbl 1391.94442
[27] Wright, S.J.: Accelerated block-coordinate relaxation for regularized optimization. SIAM J. Optim. 22(1), 159-186 (2012) · Zbl 1357.49105
[28] Wen, Z., Yin, W., Goldfarb, D., Zhang, Y.: A fast algorithm for sparse reconstruction based on shrinkage, subspace optimization and continuation. SIAM J. Sci. Comput. 32(4), 1832-1857 (2010) · Zbl 1215.49039
[29] Xiao, L., Zhang, T.: A proximal-gradient homotopy method for the sparse least-squares problem. SIAM J. Optim. 23, 1062-1091 (2013) · Zbl 1280.65057
[30] Zhang, C.-H., Huang, J.: The sparsity and bias of the lasso selection in high-dimensional linear regression. Ann. Stat. 36, 1567-1594 (2008) · Zbl 1142.62044