A gradient descent method for solving a system of nonlinear equations. (English) Zbl 1454.65033
Summary: This paper develops a gradient descent (GD) method with an explicit formulation for solving a system of nonlinear equations. We theoretically prove that the GD method converges linearly in general and, under certain conditions, is locally equivalent to Newton's method with quadratic convergence. A stochastic version of the GD method is also proposed for solving large-scale systems of nonlinear equations. Finally, several benchmark numerical examples demonstrate the feasibility and efficiency of the proposed methods in comparison with Newton's method.
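As a rough illustration of the approach the summary describes, the Python sketch below recasts F(x) = 0 as the least-squares problem of minimizing phi(x) = (1/2)||F(x)||^2, whose gradient is J(x)^T F(x), and applies plain gradient descent along with a row-sampled stochastic variant. This is only a minimal sketch of the generic reformulation: the paper's explicit step-size formulation is not reproduced here, and the fixed learning rate lr and the uniform row-sampling rule are illustrative assumptions.

import numpy as np

def gd_nonlinear_solve(F, J, x0, lr=0.1, tol=1e-10, max_iter=10000):
    """Solve F(x) = 0 by gradient descent on phi(x) = 0.5 * ||F(x)||^2.
    Each step needs one residual evaluation and one Jacobian-transpose
    product, since grad phi(x) = J(x)^T F(x)."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        r = F(x)
        if np.linalg.norm(r) < tol:   # residual small enough: converged
            break
        x -= lr * (J(x).T @ r)        # steepest-descent step on phi
    return x

def sgd_nonlinear_solve(F, J, x0, lr=0.1, batch=1, tol=1e-10,
                        max_iter=50000, seed=0):
    """Stochastic variant (an illustrative assumption, not the paper's
    scheme): each step uses only a random subset of the equations, i.e.
    a few rows of F and J. For clarity the full residual is still
    evaluated for the stopping test; a large-scale implementation
    would avoid that."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        r = F(x)
        if np.linalg.norm(r) < tol:
            break
        idx = rng.choice(r.size, size=batch, replace=False)
        x -= lr * (J(x)[idx].T @ r[idx])  # gradient of the sampled residuals
    return x

# Toy system: intersection of the unit circle with the line y = x.
F = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])
print(gd_nonlinear_solve(F, J, [1.0, 0.0]))   # approx [0.7071, 0.7071]

The stochastic routine touches only `batch` rows of the Jacobian per step, which reflects the motivation the summary gives for the large-scale variant.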
MSC:
65H10 Numerical computation of solutions to systems of equations
90C53 Methods of quasi-Newton type
Software:
Bertini; Adam; KELLEY
References:
[1] Dennis, J.; Schnabel, R., Numerical Methods for Unconstrained Optimization and Nonlinear Equations, Vol. 16 (1996), SIAM
[2] Hao, W.; Hauenstein, J.; Hu, B.; Sommese, A., A bootstrapping approach for computing multiple solutions of differential equations, J. Comput. Appl. Math., 258, 181-190 (2014) · Zbl 1294.65085
[3] Atkinson, K., A survey of numerical methods for solving nonlinear integral equations, J. Integral Equ. Appl., 4, 1, 15-46 (1992) · Zbl 0760.65118
[4] Hao, W.; Harlim, J., An equation-by-equation method for solving the multidimensional moment constrained maximum entropy problem, Commun. Appl. Math. Comput. Sci., 13, 2, 189-214 (2018) · Zbl 1409.65034
[5] Hao, W., A homotopy method for parameter estimation of nonlinear differential equations with multiple optima, J. Sci. Comput., 74, 3, 1314-1324 (2018) · Zbl 1398.65340
[6] Kelley, C., Iterative Methods for Optimization (1999), SIAM · Zbl 0934.90082
[7] Björck, Å., Numerical Methods for Least Squares Problems (1996), SIAM
[8] Chen, Q.; Hao, W., A homotopy training algorithm for fully connected neural networks, Proc. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci., 475, 2231 (2019), 20190662
[9] Bates, D.; Sommese, A.; Hauenstein, J.; Wampler, C., Numerically Solving Polynomial Systems with Bertini (2013), SIAM · Zbl 1295.65057
[10] Blaschke, B.; Neubauer, A.; Scherzer, O., On convergence rates for the iteratively regularized Gauss-Newton method, IMA J. Numer. Anal., 17, 3, 421-436 (1997) · Zbl 0881.65050
[11] Deuflhard, P., Global inexact Newton methods for very large scale nonlinear problems, IMPACT Comput. Sci. Eng., 3, 4, 366-393 (1991) · Zbl 0745.65032
[12] Dembo, R.; Eisenstat, S.; Steihaug, T., Inexact Newton methods, SIAM J. Numer. Anal., 19, 2, 400-408 (1982) · Zbl 0478.65030
[13] Knoll, D.; Keyes, D., Jacobian-free Newton-Krylov methods: a survey of approaches and applications, J. Comput. Phys., 193, 2, 357-397 (2004) · Zbl 1036.65045
[14] Coakley, D.; Raftery, P.; Keane, M., A review of methods to match building energy simulation models to measured data, Renew. Sustain. Energy Rev., 37, 123-141 (2014)
[15] Sra, S.; Nowozin, S.; Wright, S., Optimization for Machine Learning (2012), MIT Press
[16] Bottou, L., Large-scale machine learning with stochastic gradient descent, (Proceedings of COMPSTAT’2010 (2010), Springer), 177-186 · Zbl 1436.68293
[17] Kingma, D.; Ba, J., Adam: A method for stochastic optimization (2014), arXiv preprint arXiv:1412.6980
[18] Griffin, Z.; Hauenstein, J., Real solutions to systems of polynomial equations and parameter continuation, Adv. Geom., 15, 2, 173-187 (2015) · Zbl 1309.65056
[19] Nocedal, J.; Wright, S., Numerical Optimization (2006), Springer Science & Business Media · Zbl 1104.65059
[20] Yuan, Y., Step-sizes for the gradient method, AMS/IP Stud. Adv. Math., 42, 2, 785 (2008) · Zbl 1172.90509
[21] Barzilai, J.; Borwein, J., Two-point step size gradient methods, IMA J. Numer. Anal., 8, 1, 141-148 (1988) · Zbl 0638.65055
[22] Curry, H., The method of steepest descent for non-linear minimization problems, Quart. Appl. Math., 2, 3, 258-261 (1944) · Zbl 0061.26801
[23] Dai, Y., Alternate step gradient method, Optimization, 52, 4-5, 395-415 (2003) · Zbl 1056.65055
[24] Raydan, M.; Svaiter, B., Relaxed steepest descent and Cauchy-Barzilai-Borwein method, Comput. Optim. Appl., 21, 2, 155-167 (2002) · Zbl 0988.90049
[25] Bottou, L.; Curtis, F.; Nocedal, J., Optimization methods for large-scale machine learning, SIAM Rev., 60, 2, 223-311 (2018) · Zbl 1397.65085
[26] Chen, Q.; Hao, W., A randomized Newton’s method for solving differential equations based on the neural network discretization (2019)
[27] Zeng, Z., A Newton’s iteration converges quadratically to nonisolated solutions too (2019)
[28] Ninomiya, H.; Asai, H., Orthogonalized steepest descent method for solving nonlinear equations, (Proceedings of ISCAS '95, International Symposium on Circuits and Systems, Vol. 1 (1995), IEEE), 740-743
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data-conversion errors. The list attempts to reflect the references in the original paper as accurately as possible, without claiming completeness or perfect precision of the matching.