
Convergence rates analysis of a multiobjective proximal gradient method. (English) Zbl 1514.90212

Summary: Many descent algorithms for multiobjective optimization have been developed over the last two decades. H. Tanabe et al. [Comput. Optim. Appl. 72, No. 2, 339–361 (2019; Zbl 1420.90065)] proposed a proximal gradient method for multiobjective optimization, which solves problems whose objective functions are each the sum of a continuously differentiable function and a closed, proper, convex one. Under reasonable assumptions, the accumulation points of the sequences generated by this method are known to be Pareto stationary; however, that paper did not establish convergence rates. Here, we show global convergence rates for the multiobjective proximal gradient method that match those known in scalar optimization. More specifically, using merit functions to measure complexity, we present convergence rates for non-convex \((O(\sqrt{1/k}))\), convex \((O(1/k))\), and strongly convex \((O(r^k)\) for some \(r \in (0, 1))\) problems. We also extend the Polyak-Łojasiewicz (PL) inequality to multiobjective optimization and establish a linear convergence rate \((O(r^k)\) for some \(r \in (0, 1))\) for multiobjective problems satisfying such an inequality.
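As a minimal sketch of the kind of iteration the summary describes, consider the smooth case \(g_i \equiv 0\), where each proximal step reduces to the multiobjective steepest-descent subproblem \(\min_d \max_i \langle \nabla f_i(x), d\rangle + (\ell/2)\|d\|^2\) of Fliege and Svaiter [11]. For two objectives the dual is a one-dimensional quadratic over \([0,1]\), so the min-norm convex combination of the gradients has a closed form. The function names and the toy quadratic objectives below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def min_norm_direction(g1, g2, ell=1.0):
    """Return the minimizer d of max_i <g_i, d> + (ell/2)||d||^2
    for two gradients, via the closed-form dual solution."""
    diff = g1 - g2
    denom = float(diff @ diff)
    if denom < 1e-16:                  # gradients coincide
        lam = 0.5
    else:                              # minimize ||g2 + lam*diff||^2 over [0, 1]
        lam = float(np.clip(-(g2 @ diff) / denom, 0.0, 1.0))
    v = lam * g1 + (1.0 - lam) * g2    # min-norm element of conv{g1, g2}
    return -v / ell

# Toy instance: two convex quadratics f_i(x) = 0.5*||x - a_i||^2 with
# distinct minimizers; their Pareto set is the segment between a1 and a2.
a1, a2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
grad_f1 = lambda x: x - a1
grad_f2 = lambda x: x - a2

x = np.array([3.0, -2.0])
for _ in range(200):
    d = min_norm_direction(grad_f1(x), grad_f2(x))
    if np.linalg.norm(d) < 1e-8:       # Pareto stationarity: 0 in conv of gradients
        break
    x = x + d                          # step size 1/ell with ell = L = 1

print("approx Pareto-stationary point:", x)
```

On this strongly convex toy problem the iterates reach a Pareto-stationary point on the segment between the two minimizers, consistent with the linear rate the paper proves in the strongly convex case.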

MSC:

90C29 Multi-objective and goal programming
90C52 Methods of reduced gradient type

Citations:

Zbl 1420.90065
Full Text: DOI arXiv

References:

[1] Beck, A.: First-order methods in optimization. Society for Industrial and Applied Mathematics (2017). doi:10.1137/1.9781611974997 · Zbl 1384.65033
[2] Beck, A.; Teboulle, M., A fast iterative shrinkage-thresholding algorithm for linear inverse problems, SIAM J. Imag. Sci., 2, 1, 183-202 (2009) · Zbl 1175.94009
[3] Bello-Cruz, Y.; Melo, JG; Serra, RV, A proximal gradient splitting method for solving convex vector optimization problems, Optimization (2020) · Zbl 1486.90143
[4] Bertsekas, DP, Nonlinear programming (1999), Belmont, Mass: Athena Scientific, Belmont, Mass · Zbl 1015.90077
[5] Boţ, RI; Grad, SM, Inertial forward-backward methods for solving vector optimization problems, Optimization, 67, 7, 959-974 (2018) · Zbl 1402.90163
[6] Bonnel, H.; Iusem, AN; Svaiter, BF, Proximal methods in vector optimization, SIAM J. Optim., 15, 4, 953-970 (2005) · Zbl 1093.90054
[7] Calderón, L.; Diniz-Ehrhardt, MA; Martínez, JM, On high-order model regularization for multiobjective optimization, Optim. Methods. Softw. (2020) · Zbl 1501.90088
[8] Carrizo, GA; Lotito, PA; Maciel, MC, Trust region globalization strategy for the nonconvex unconstrained multiobjective optimization problem, Math. Program., 159, 1-2, 339-369 (2016) · Zbl 1345.90081
[9] Custódio, AL; Madeira, JF; Vaz, AI; Vicente, LN, Direct multisearch for multiobjective optimization, SIAM J. Optim., 21, 3, 1109-1140 (2011) · Zbl 1230.90167
[10] Fliege, J.; Graña Drummond, LM; Svaiter, BF, Newton’s method for multiobjective optimization, SIAM J. Optim., 20, 2, 602-626 (2009) · Zbl 1195.90078
[11] Fliege, J.; Svaiter, BF, Steepest descent methods for multicriteria optimization, Math. Methods Oper. Res., 51, 3, 479-494 (2000) · Zbl 1054.90067
[12] Fliege, J.; Vaz, AI; Vicente, LN, Complexity of gradient descent for multiobjective optimization, Optim. Methods. Softw., 34, 5, 949-959 (2019) · Zbl 1429.90067
[13] Fukuda, EH; Graña Drummond, LM, A survey on multiobjective descent methods, Pesquisa Operacional, 34, 3, 585-620 (2014)
[14] Fukushima, M.; Mine, H., A generalized proximal point algorithm for certain non-convex minimization problems, Int. J. Syst. Sci., 12, 8, 989-1000 (1981) · Zbl 0467.65028
[15] Graña Drummond, LM; Iusem, AN, A projected gradient method for vector optimization problems, Comput. Optim. Appl., 28, 1, 5-29 (2004) · Zbl 1056.90126
[16] Grapiglia, GN; Yuan, J.; Yuan, YX, On the convergence and worst-case complexity of trust-region and regularization methods for unconstrained optimization, Math. Program., 152, 1-2, 491-520 (2015) · Zbl 1319.90065
[17] Hoffman, AJ, On approximate solutions of systems of linear inequalities, J. Res. Natl. Bur. Stand., 49, 4, 263-265 (1952)
[18] Karimi, H., Nutini, J., Schmidt, M.: Linear convergence of gradient and proximal-gradient methods under the Polyak-Łojasiewicz condition. In: P. Frasconi, N. Landwehr, G. Manco, J. Vreeken (eds.) Machine Learning and Knowledge Discovery in Databases, pp. 795-811. Springer International Publishing, Cham (2016). doi:10.1007/978-3-319-46128-1_50
[19] Lucambio Pérez, LR; Prudente, LF, Nonlinear conjugate gradient methods for vector optimization, SIAM J. Optim., 28, 3, 2690-2720 (2018) · Zbl 1401.90210
[20] Nesterov, Y.: Introductory lectures on convex optimization: A basic course. Kluwer Academic Publishers, Dordrecht (2004). doi:10.1007/978-1-4419-8853-9 · Zbl 1086.90045
[21] Polyak, B., Gradient methods for minimizing functionals (in Russian), Zh. Vychisl. Mat. Mat. Fiz., 3, 4, 643-653 (1963) · Zbl 0196.47701
[22] Sion, M., On general minimax theorems, Pacific J. Math., 8, 1, 171-176 (1958) · Zbl 0081.11502
[23] Tanabe, H.; Fukuda, EH; Yamashita, N., Proximal gradient methods for multiobjective optimization and their applications, Comput. Optim. Appl., 72, 2, 339-361 (2019) · Zbl 1420.90065
[24] Tanabe, H., Fukuda, E.H., Yamashita, N.: New merit functions for multiobjective optimization and their properties. arXiv:2010.09333 (2022)