
Optimal step length for the Newton method: case of self-concordant functions. (English) Zbl 1483.90179

Summary: The theoretical foundation of path-following methods is the performance analysis of the (damped) Newton step on the class of self-concordant functions. However, the bounds available in the literature and used in the design of path-following methods are not optimal. In this contribution we use methods of optimal control theory to compute the optimal step length of the Newton method on the class of self-concordant functions, as a function of the initial Newton decrement, and the resulting worst-case decrease of the decrement. The exact bounds are expressed in terms of solutions of ordinary differential equations which cannot be integrated explicitly. We provide approximate numerical and analytic expressions which are accurate enough for use in optimization methods. Consequently, the neighbourhood of the central path in which the iterates of path-following methods are required to stay can be enlarged, enabling faster progress along the central path during each iteration and hence fewer iterations to achieve a given accuracy.
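As a point of reference for the bounds the paper improves upon, the following is a minimal sketch (not the authors' method) of the classical damped Newton iteration on a standard one-dimensional self-concordant function, f(x) = x - ln x with minimizer x = 1. It uses the textbook step length 1/(1 + λ), where λ is the Newton decrement, and checks the classical worst-case guarantee λ⁺ ≤ 2λ² that the paper's optimal step length sharpens; the function choice and variable names are illustrative assumptions.

```python
import math

# Illustrative self-concordant test function f(x) = x - ln(x) on x > 0,
# with minimizer at x = 1 (an assumption for this sketch, not from the paper).
def fp(x):   # f'(x) = 1 - 1/x
    return 1.0 - 1.0 / x

def fpp(x):  # f''(x) = 1/x^2
    return 1.0 / x**2

def newton_decrement(x):
    # In one dimension, lambda(x) = |f'(x)| / sqrt(f''(x)).
    return abs(fp(x)) / math.sqrt(fpp(x))

def damped_newton_step(x):
    # Classical (non-optimal) step length 1/(1 + lambda); the paper
    # computes the optimal step length as a function of lambda.
    lam = newton_decrement(x)
    return x - fp(x) / fpp(x) / (1.0 + lam)

x = 0.5
for _ in range(6):
    lam_before = newton_decrement(x)
    x = damped_newton_step(x)
    # Classical worst-case bound on the decrement after a damped step:
    # lambda(x+) <= 2 * lambda(x)^2  (see Nesterov's lectures).
    assert newton_decrement(x) <= 2 * lam_before**2 + 1e-12
```

For this particular f one can verify by hand that the damped step maps a decrement λ to exactly 2λ²/(1 + λ), so the classical bound holds with slack; the paper's contribution is the exact worst-case decrease over the whole self-concordant class.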

MSC:

90C51 Interior-point methods
90C60 Abstract computational complexity for mathematical programming problems

Software:

PESTO
