
General inertial proximal gradient method for a class of nonconvex nonsmooth optimization problems. (English) Zbl 1420.90054

Summary: In this paper, we consider a general inertial proximal gradient method with constant and variable stepsizes for a class of nonconvex nonsmooth optimization problems. The proposed method incorporates two different extrapolations with respect to the previous iterates into the backward proximal step and the forward gradient step of the classic proximal gradient method. Under more general parameter constraints, we prove that the sequence generated by the proposed method has a convergent subsequence and that each limit point is a stationary point of the problem. Furthermore, the generated sequence is globally convergent to a stationary point if the objective function satisfies the Kurdyka-Łojasiewicz property. Local linear convergence can also be established for the proposed method with constant stepsizes by using a common error bound condition. In addition, we conduct some numerical experiments on nonconvex quadratic programming and SCAD penalty problems to demonstrate the advantage of the proposed method.
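To make the iteration described in the summary concrete, the following Python snippet sketches a two-extrapolation inertial proximal gradient step. This is an illustrative reading of the scheme, not the authors' code: the step size alpha, the inertial parameters beta and gamma, the soft-thresholding prox, and the indefinite quadratic test problem are all assumptions chosen here for demonstration and do not come from the paper.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1, used here as the nonsmooth part g.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def inertial_proximal_gradient(grad_f, prox_g, x0, alpha, beta, gamma,
                               max_iter=500, tol=1e-8):
    """Illustrative two-extrapolation inertial proximal gradient iteration:
        y_k = x_k + beta  * (x_k - x_{k-1})   # extrapolation fed into the backward (proximal) step
        z_k = x_k + gamma * (x_k - x_{k-1})   # extrapolation fed into the forward (gradient) step
        x_{k+1} = prox_{alpha g}( y_k - alpha * grad_f(z_k) )
    """
    x_prev = x0.copy()
    x = x0.copy()
    for _ in range(max_iter):
        y = x + beta * (x - x_prev)
        z = x + gamma * (x - x_prev)
        x_next = prox_g(y - alpha * grad_f(z), alpha)
        step = np.linalg.norm(x_next - x)
        x_prev, x = x, x_next
        if step <= tol * max(1.0, np.linalg.norm(x)):
            break
    return x

# Toy usage (hypothetical test problem): f(x) = 0.5 x^T A x + b^T x with A symmetric
# and generally indefinite (so f is nonconvex), g(x) = lam * ||x||_1.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 50
    M = rng.standard_normal((n, n))
    A = 0.5 * (M + M.T)              # symmetric, possibly indefinite
    b = rng.standard_normal(n)
    lam = 0.1
    L = np.linalg.norm(A, 2)         # Lipschitz constant of grad f

    grad_f = lambda x: A @ x + b
    prox_g = lambda v, a: soft_threshold(v, a * lam)

    x_star = inertial_proximal_gradient(grad_f, prox_g, x0=np.zeros(n),
                                        alpha=0.9 / L, beta=0.3, gamma=0.3)
    # Stationarity residual: x is stationary iff x = prox_{t g}(x - t grad_f(x)) for t > 0.
    t = 1.0 / L
    print("stationarity residual:",
          np.linalg.norm(x_star - prox_g(x_star - t * grad_f(x_star), t)))
```

In this sketch, setting beta = gamma recovers a single-extrapolation inertial (accelerated-type) proximal gradient iteration, and beta = gamma = 0 reduces to the classic proximal gradient method.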

MSC:

90C26 Nonconvex programming, global optimization

Software:

UNLocBoX; iPiano; iPiasco
