On the behavior of the gradient norm in the steepest descent method. (English) Zbl 1008.90057

Summary: It is well known that the norm of the gradient may be unreliable as a stopping test in unconstrained optimization, and that it often exhibits oscillations in the course of the optimization. We present results describing the properties of the gradient norm for the steepest descent method applied to quadratic objective functions. We also make some general observations that apply to nonlinear problems, relating the gradient norm, the objective function value, and the path generated by the iterates.
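A minimal numerical sketch (not taken from the paper) of the phenomenon the summary describes: steepest descent with exact line search on an ill-conditioned two-dimensional quadratic. The starting point below is an illustrative choice that makes the gradient norm rise sharply on the first iteration, even though the function value decreases monotonically, showing why the gradient norm can be a misleading stopping test.

```python
import numpy as np

def steepest_descent_quadratic(A, x0, iters=40):
    """Steepest descent with exact line search on f(x) = 0.5 * x^T A x.

    Returns the histories of gradient norms and function values.
    """
    x = np.asarray(x0, dtype=float)
    gnorms, fvals = [], []
    for _ in range(iters):
        g = A @ x                      # gradient of the quadratic
        gnorms.append(np.linalg.norm(g))
        fvals.append(0.5 * x @ A @ x)
        alpha = (g @ g) / (g @ A @ g)  # exact minimizing step length
        x = x - alpha * g
    return gnorms, fvals

# Ill-conditioned diagonal quadratic with eigenvalues 1 and 100.
# The start is chosen so the initial gradient is (10, 1); for this
# direction the one-step ratio ||g_{k+1}|| / ||g_k|| exceeds 1.
A = np.diag([1.0, 100.0])
gnorms, fvals = steepest_descent_quadratic(A, x0=[10.0, 0.01])

print(gnorms[:4])  # the gradient norm jumps up before settling into decay
print(fvals[:4])   # the function values decrease monotonically
```

With exact line search the function values always decrease, but the gradient norms alternate between large growth and large decay steps while shrinking only on average, which is the oscillatory behavior the summary refers to.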
90C30 Nonlinear programming
Full Text: DOI