Step-sizes for the gradient method.

*(English)* Zbl 1172.90509
Lau, Ka-Sing (ed.) et al., Third international congress of Chinese mathematicians. Part 2. Proceedings of the ICCM ’04, Hong Kong, China, December 17–22, 2004. Providence, RI: American Mathematical Society (AMS); Somerville, MA: International Press (ISBN 978-0-8218-4452-6/pbk; 978-0-8218-4416-8/set). AMS/IP Studies in Advanced Mathematics 42, 2, 785-796 (2008).

Summary: The gradient method searches along the steepest descent direction, i.e., the direction opposite to the gradient of the objective function. It ensures a reduction of the objective function as long as the current iterate is not a stationary point. Different choices of step-size lead to different gradient algorithms. However, the exact line search, which might seem to be the best choice since it selects the next iterate by minimizing the function value along the search direction, turns out to be a very poor method in practice because it often converges very slowly. In recent years much research has been devoted to the choice of step-size for the gradient method, following the remarkable result of J. Barzilai and J. M. Borwein [IMA J. Numer. Anal. 8, No. 1, 141–148 (1988; Zbl 0638.65055)], where a specific choice of step-size is given and proved to yield superlinear convergence for two-dimensional convex quadratic problems. In this paper we review some of the recent advances in this very active and interesting subject, and give new step-sizes for the gradient method.
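As a minimal illustration (not the paper's own code), the Barzilai–Borwein step-size mentioned in the summary can be sketched as follows: with s_{k-1} = x_k − x_{k-1} and y_{k-1} = g_k − g_{k-1}, one variant takes α_k = (s_{k-1}ᵀs_{k-1})/(s_{k-1}ᵀy_{k-1}). The test problem, initial point, and iteration count below are illustrative choices, not taken from the paper.

```python
import numpy as np

# Sketch of the Barzilai-Borwein (BB) gradient method on a
# 2-D convex quadratic f(x) = 0.5 * x^T A x, grad f(x) = A x.
A = np.diag([1.0, 10.0])            # SPD Hessian; eigenvalues 1 and 10
grad = lambda x: A @ x

x = np.array([1.0, 1.0])            # illustrative starting point
g = grad(x)
x_new = x - (1.0 / 10.0) * g        # first step: fixed step-size 1/lambda_max

for _ in range(30):
    g_new = grad(x_new)
    if np.linalg.norm(g_new) < 1e-12:   # stationary point reached
        break
    s = x_new - x                   # difference of iterates
    y = g_new - g                   # difference of gradients
    alpha = (s @ s) / (s @ y)       # BB step-size (long variant)
    x, g = x_new, g_new
    x_new = x - alpha * g           # gradient step with BB step-size

print(np.linalg.norm(x_new))        # near 0: fast convergence in 2-D
```

On this two-dimensional quadratic the iterates approach the minimizer in very few steps, consistent with the superlinear-convergence result cited above; an exact line search on the same problem would instead zigzag slowly between the eigendirections.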

For the entire collection see [Zbl 1135.00010].