Summary: Recently, in a manner similar to [W. W. Hager and H. Zhang, SIAM J. Optim. 16, No. 1, 170–192 (2005; Zbl 1093.90085)], G. H. Yu [Nonlinear self-scaling conjugate gradient methods for large-scale optimization problems. PhD Thesis, Sun Yat-Sen University (2007)] and G. Yuan [Optim. Lett. 3, No. 1, 11–21 (2009; Zbl 1154.90623)] proposed modified PRP conjugate gradient methods that generate sufficient descent directions without any line search. To establish the global convergence of their algorithms, however, they needed the assumption that the stepsizes are bounded away from zero. In this paper, we slightly modify these methods so that the modified methods retain the sufficient descent property. Without requiring a positive lower bound on the stepsize, we prove that the proposed methods are globally convergent. Some numerical results are also reported.
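To illustrate the kind of modification involved, the following is a minimal sketch of one well-known three-term variant of the PRP direction (the exact formulas of the methods reviewed here may differ); in this variant the extra correction term makes the descent identity $g_k^T d_k = -\|g_k\|^2$ hold algebraically, for any stepsize:

```python
import numpy as np

def modified_prp_direction(g, g_prev, d_prev):
    """A three-term modification of the PRP search direction (illustrative).

    Returns d = -g + beta_prp * d_prev - theta * y, where
    y = g - g_prev, beta_prp = g^T y / ||g_prev||^2, and
    theta = g^T d_prev / ||g_prev||^2.  The theta-term cancels the
    contribution of d_prev to g^T d, so the sufficient descent
    identity g^T d = -||g||^2 holds independently of the line search.
    """
    y = g - g_prev
    denom = g_prev @ g_prev
    beta_prp = (g @ y) / denom
    theta = (g @ d_prev) / denom
    return -g + beta_prp * d_prev - theta * y

# Verify the descent identity on random data.
rng = np.random.default_rng(0)
g, g_prev, d_prev = (rng.standard_normal(5) for _ in range(3))
d = modified_prp_direction(g, g_prev, d_prev)
assert np.isclose(g @ d, -(g @ g))  # g^T d = -||g||^2 exactly
```

Because the identity is purely algebraic, descent is guaranteed at every iterate; the analytical difficulty addressed in the paper lies instead in proving global convergence without assuming the resulting stepsizes stay bounded away from zero.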