Dai, Yuhong; Yuan, Yaxiang
A nonlinear conjugate gradient method with a strong global convergence property. (English) Zbl 0957.65061
SIAM J. Optim. 10, No. 1, 177-182 (1999).

The paper presents a new version of the conjugate gradient method
\[
x_{k+1}=x_k+\alpha_k d_k, \quad d_{k+1}=-f'(x_{k+1})+\beta_k d_k, \quad \beta_k=\|f'(x_{k+1})\|^2/ d_k^T\bigl(f'(x_{k+1})-f'(x_k)\bigr)
\]
for solving the unconstrained optimization problem \(\min_{x\in \mathbb R^n} f(x)\). Unlike the classical stepsize rule \(\alpha_k =\text{argmin}\{ f(x_k+\alpha d_k): \alpha \geq 0 \}\), the authors analyse a scheme in which \(\alpha_k\) is chosen arbitrarily subject to the Wolfe conditions \(f(x_k)-f(x_k+\alpha_k d_k) \geq -\delta \alpha_k f^{\prime T}(x_k) d_k\) and \(f^{\prime T}(x_k+\alpha_k d_k) d_k \geq \sigma f^{\prime T}(x_k) d_k\), where \(0<\delta <\sigma <1\). The convergence \(f'(x_k) \to 0\) is established provided that \(f(x)\) is bounded below on the level set \(N=\{x \in \mathbb R^n: f(x) \leq f(x_1)\}\) and \(f'(x)\) is Lipschitz continuous on \(N\). The boundedness of \(N\) is not assumed.

Reviewer: Mikhail Yu. Kokurin (Yoshkar-Ola)

MSC:
65K05 Numerical mathematical programming methods
90C30 Nonlinear programming
90C53 Methods of quasi-Newton type

Keywords: unconstrained optimization; conjugate gradient method; Wolfe conditions; global convergence

Software: CG_DESCENT; CUTEr; L-BFGS
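As a concrete illustration of the recurrences above (not the authors' code), the method can be sketched in NumPy. The bisection-style Wolfe line search, the parameter values \(\delta=10^{-4}\), \(\sigma=0.9\), and the quadratic test function in the usage example are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def wolfe_step(f, grad, x, d, delta=1e-4, sigma=0.9, max_iter=60):
    """Find a stepsize satisfying the (standard) Wolfe conditions by
    bracketing and bisection. delta, sigma: 0 < delta < sigma < 1."""
    fx, gd = f(x), grad(x) @ d          # gd = f'(x)^T d < 0 for descent d
    lo, hi, alpha = 0.0, np.inf, 1.0
    for _ in range(max_iter):
        if f(x + alpha * d) > fx + delta * alpha * gd:
            hi = alpha                  # sufficient decrease fails: shrink
            alpha = 0.5 * (lo + hi)
        elif grad(x + alpha * d) @ d < sigma * gd:
            lo = alpha                  # curvature fails: grow / bisect
            alpha = 2.0 * alpha if hi == np.inf else 0.5 * (lo + hi)
        else:
            break                       # both Wolfe conditions hold
    return alpha

def dai_yuan_cg(f, grad, x0, tol=1e-8, max_iter=1000):
    """Conjugate gradient iteration with the Dai-Yuan coefficient
    beta_k = ||g_{k+1}||^2 / (d_k^T (g_{k+1} - g_k))."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                              # initial direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = wolfe_step(f, grad, x, d)
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = (g_new @ g_new) / (d @ (g_new - g))
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Usage example on an ill-conditioned quadratic (minimizer at the origin):
A = np.diag([1.0, 10.0])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
x_star = dai_yuan_cg(f, grad, np.array([2.0, 1.0]))
```

Under the Wolfe conditions the denominator \(d_k^T(g_{k+1}-g_k)\) is positive for a descent direction, which is the key property behind the global convergence result of the paper.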