A survey of nonlinear conjugate gradient methods. (English) Zbl 1117.90048

This is a very nice paper which focuses on conjugate gradient methods applied to the nonlinear unconstrained optimization problem \(\min_{x\in\mathbb{R}^n}\,f(x)\), where \(f: \mathbb{R}^n\to\mathbb{R}\) is a continuously differentiable function, bounded from below. It is known that these methods are characterized by low memory requirements and strong local and global convergence properties.
The main goal is to analyze the global convergence properties of conjugate gradient methods. Section 2 discusses classical line search criteria based on the Wolfe conditions. Section 3 summarizes how the initial search direction affects global convergence. Sections 4 and 5 discuss the global convergence properties of conjugate gradient methods, grouped according to the numerator of the conjugate gradient update parameter. Section 6 introduces hybrid methods, obtained by dynamically adjusting the formula for the update parameter as the iterations evolve, combining the methods of Sections 4 and 5. The paper closes with a discussion of preconditioning techniques for conjugate gradient algorithms.
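As a rough illustration of the class of methods surveyed, the following sketch implements a nonlinear conjugate gradient iteration with two of the classical update parameters (Fletcher–Reeves, whose numerator is \(\|g_{k+1}\|^2\), and a nonnegative Polak–Ribière variant, whose numerator is \(g_{k+1}^{\mathsf T}(g_{k+1}-g_k)\)). For brevity it uses a simple backtracking Armijo line search rather than the full Wolfe-condition searches analyzed in the paper, and the steepest-descent restart safeguard is an added assumption, not part of the survey's framework.

```python
import numpy as np

def nonlinear_cg(f, grad, x0, beta_formula="PR", tol=1e-6, max_iter=500):
    """Minimal nonlinear CG sketch (not the survey's exact algorithm)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                              # initial direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Backtracking line search enforcing the Armijo (sufficient
        # decrease) condition, a simplification of a Wolfe search.
        alpha, c1 = 1.0, 1e-4
        while f(x + alpha * d) > f(x) + c1 * alpha * (g @ d):
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        if beta_formula == "FR":        # Fletcher-Reeves: ||g_{k+1}||^2
            beta = (g_new @ g_new) / (g @ g)
        else:                           # Polak-Ribiere+: g_{k+1}^T y_k, clipped at 0
            beta = max(0.0, g_new @ (g_new - g) / (g @ g))
        d = -g_new + beta * d
        if g_new @ d >= 0:              # safeguard: restart if not a descent direction
            d = -g_new
        x, g = x_new, g_new
    return x
```

For example, on the strongly convex quadratic \(f(x)=\tfrac12 x^{\mathsf T}Ax - b^{\mathsf T}x\) the iterates converge to the solution of \(Ax=b\).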


90C06 Large-scale problems in mathematical programming
90C26 Nonconvex programming, global optimization
90C52 Methods of reduced gradient type
65Y20 Complexity and performance of numerical algorithms