
On the quadratic convergence of the cubic regularization method under a local error bound condition. (English) Zbl 1411.90284
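
The method in question is the cubic regularization of Newton's method [27]: each iterate minimizes a second-order Taylor model of the objective augmented by a cubic penalty. As a minimal sketch in standard notation (\(f\) the objective, \(x_k\) the current iterate, \(\sigma_k > 0\) the regularization parameter; [27] ties the cubic coefficient to a Lipschitz constant of the Hessian, while the adaptive variants [11, 12] update \(\sigma_k\) algorithmically):
\[
x_{k+1} \in \operatorname*{arg\,min}_{x \in \mathbb{R}^n}\; f(x_k) + \nabla f(x_k)^{\top}(x - x_k) + \tfrac{1}{2}(x - x_k)^{\top}\nabla^2 f(x_k)(x - x_k) + \tfrac{\sigma_k}{3}\,\|x - x_k\|^3 .
\]
In this line of work, a "local error bound condition" bounds the distance from a point to the solution set by a computable residual near that set, replacing the classical nondegeneracy (isolated, nonsingular minimizer) assumption in local convergence analyses.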

MSC:
90C26 Nonconvex programming, global optimization
90C30 Nonlinear programming
65K05 Numerical mathematical programming methods
49M15 Newton-type methods
References:
[1] F. J. Aragón Artacho and M. H. Geoffroy, Characterization of metric regularity of subdifferentials, J. Convex Anal., 15 (2008), pp. 365–380. · Zbl 1146.49012
[2] S. Bellavia and B. Morini, Strong local convergence properties of adaptive regularized methods for nonlinear least squares, IMA J. Numer. Anal., 35 (2014), pp. 947–968. · Zbl 1316.65061
[3] R. Bhatia, Matrix Analysis, Grad. Texts in Math. 169, Springer, New York, 1997.
[4] S. Bhojanapalli, B. Neyshabur, and N. Srebro, Global optimality of local search for low rank matrix recovery, in Adv. Neural Inf. Process. Syst. 29, Curran Associates, Red Hook, NY, 2016, pp. 3873–3881.
[5] J. Bolte, T. P. Nguyen, J. Peypouquet, and B. W. Suter, From error bounds to the complexity of first-order descent methods for convex functions, Math. Program., 165 (2017), pp. 471–507. · Zbl 1373.90076
[6] E. J. Candès, The restricted isometry property and its implications for compressed sensing, C. R. Acad. Sci. Paris Ser. I, 346 (2008), pp. 589–592. · Zbl 1153.94002
[7] E. J. Candès, X. Li, and M. Soltanolkotabi, Phase retrieval via Wirtinger flow: Theory and algorithms, IEEE Trans. Inf. Theory, 61 (2015), pp. 1985–2007. · Zbl 1359.94069
[8] E. J. Candès and Y. Plan, Tight oracle inequalities for low-rank matrix recovery from a minimal number of noisy random measurements, IEEE Trans. Inf. Theory, 57 (2011), pp. 2342–2359. · Zbl 1366.90160
[9] Y. Carmon and J. C. Duchi, Gradient Descent Efficiently Finds the Cubic-Regularized Non-Convex Newton Step, preprint, 2016.
[10] C. Cartis, N. I. M. Gould, and Ph. L. Toint, On the complexity of steepest descent, Newton’s and regularized Newton’s methods for nonconvex unconstrained optimization problems, SIAM J. Optim., 20 (2010), pp. 2833–2852. · Zbl 1211.90225
[11] C. Cartis, N. I. M. Gould, and Ph. L. Toint, Adaptive cubic regularisation methods for unconstrained optimization. Part I: Motivation, convergence and numerical results, Math. Program., 127 (2011), pp. 245–295. · Zbl 1229.90192
[12] C. Cartis, N. I. M. Gould, and Ph. L. Toint, Adaptive cubic regularisation methods for unconstrained optimization. Part II: Worst-case function- and derivative-evaluation complexity, Math. Program., 130 (2011), pp. 295–319. · Zbl 1229.90193
[13] D. Drusvyatskiy, A. D. Ioffe, and A. S. Lewis, Nonsmooth Optimization Using Taylor-Like Models: Error Bounds, Convergence, and Termination Criteria, preprint, 2016.
[14] D. Drusvyatskiy, B. S. Mordukhovich, and T. T. Nghia, Second-order growth, tilt stability, and metric regularity of the subdifferential, J. Convex Anal., 21 (2014), pp. 1165–1192. · Zbl 1311.49035
[15] J.-y. Fan and Y.-x. Yuan, On the quadratic convergence of the Levenberg–Marquardt method without nonsingularity assumption, Computing, 74 (2005), pp. 23–39. · Zbl 1076.65047
[16] A. Fischer, Local behavior of an iterative framework for generalized equations with nonisolated solutions, Math. Program., 94 (2002), pp. 91–124. · Zbl 1023.90067
[17] A. Griewank, The Modification of Newton’s Method for Unconstrained Optimization by Bounding Cubic Terms, Technical report, Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge, UK, 1981.
[18] P. Hartman, Ordinary Differential Equations, Classics Appl. Math. 38, SIAM, Philadelphia, PA, 2002. · Zbl 1009.34001
[19] K. Levenberg, A method for the solution of certain non-linear problems in least squares, Quart. Appl. Math., 2 (1944), pp. 164–168. · Zbl 0063.03501
[20] D.-H. Li, M. Fukushima, L. Qi, and N. Yamashita, Regularized Newton methods for convex minimization problems with singular solutions, Comput. Optim. Appl., 28 (2004), pp. 131–147. · Zbl 1056.90111
[21] G. Li and T. K. Pong, Calculus of the exponent of Kurdyka–Łojasiewicz inequality and its applications to linear convergence of first-order methods, Found. Comput. Math., 18 (2018), pp. 1199–1232. · Zbl 1405.90076
[22] S. Łojasiewicz, Sur les trajectoires du gradient d’une fonction analytique, Seminari di Geometria (Bologna 1982/83), Università degli Studi di Bologna, Bologna, 1984, pp. 115–117.
[23] D. R. Luke, Phase retrieval, what’s new?, SIAG/OPT Views and News, 25(1) (2017), pp. 1–5.
[24] Z.-Q. Luo and P. Tseng, Error bounds and convergence analysis of feasible descent methods: A general approach, Ann. Oper. Res., 46 (1993), pp. 157–178. · Zbl 0793.90076
[25] D. W. Marquardt, An algorithm for least-squares estimation of nonlinear parameters, J. Soc. Ind. Appl. Math., 11 (1963), pp. 431–441. · Zbl 0112.10505
[26] B. S. Mordukhovich, Variational Analysis and Generalized Differentiation I: Basic Theory, Grundlehren Math. Wiss. 330, Springer, Berlin, 2006.
[27] Yu. Nesterov and B. T. Polyak, Cubic regularization of Newton method and its global performance, Math. Program., 108 (2006), pp. 177–205. · Zbl 1142.90500
[28] J. Nocedal and S. J. Wright, Numerical Optimization, Springer, New York, 2006. · Zbl 1104.65059
[29] R. T. Rockafellar and R. J.-B. Wets, Variational Analysis, Grundlehren Math. Wiss. 317, Springer, Berlin, 1998.
[30] Y. Shechtman, Y. C. Eldar, O. Cohen, H. N. Chapman, J. Miao, and M. Segev, Phase retrieval with application to optical imaging: A contemporary overview, IEEE Signal Process. Mag., 32 (2015), pp. 87–109.
[31] J. Sun, Q. Qu, and J. Wright, A geometric analysis of phase retrieval, Found. Comput. Math., 18 (2018), pp. 1131–1198. · Zbl 1401.94049
[32] Ph. L. Toint, Nonlinear stepsize control, trust regions and regularizations for unconstrained optimization, Optim. Methods Softw., 28 (2013), pp. 82–95. · Zbl 1270.90078
[33] P. Tseng, Error bounds and superlinear convergence analysis of some Newton-type methods in optimization, in Nonlinear Optimization and Related Topics, G. Di Pillo and F. Giannessi, eds., Springer, New York, 2000, pp. 445–462. · Zbl 0965.65091
[34] N. Yamashita and M. Fukushima, On the rate of convergence of the Levenberg–Marquardt method, in Topics in Numerical Analysis, G. Alefeld and X. Chen, eds., Comput. Suppl. 15, Springer, Vienna, 2001, pp. 239–249. · Zbl 1001.65047
[35] Y.-x. Yuan, Recent advances in trust region algorithms, Math. Program., 151 (2015), pp. 249–281. · Zbl 1317.65141
[36] M.-C. Yue, Z. Zhou, and A. M.-C. So, A family of inexact SQA methods for non-smooth convex minimization with provable convergence guarantees based on the Luo–Tseng error bound property, Math. Program. (2018).
[37] Y. Zhou and Y. Liang, Characterization of Gradient Dominance and Regularity Conditions for Neural Networks, preprint, 2017.
[38] Z. Zhou and A. M.-C. So, A unified approach to error bounds for structured convex optimization problems, Math. Program., 165 (2017), pp. 689–728. · Zbl 1380.65109
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.