Acceleration of conjugate gradient algorithms for unconstrained optimization. (English) Zbl 1172.65027

A new approach for accelerating conjugate gradient methods is presented. The proposed method modifies the steplength \(\alpha _k\), computed via the Wolfe line search conditions, by a positive parameter \(\eta _k\), in order to improve the behavior of classical conjugate gradient algorithms, which are mainly applied to large-scale unconstrained optimization. It is shown that for uniformly convex functions the convergence of the accelerated algorithm remains linear, but the reduction in function values is significantly improved. Numerical comparisons with conjugate gradient algorithms on a set of 750 unconstrained optimization problems show that the accelerated computational scheme outperforms the corresponding conjugate gradient algorithms.
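To make the idea concrete, the following is a minimal sketch of one way such a steplength acceleration can look in a nonlinear conjugate gradient iteration. The specific formula for \(\eta_k\) below (a two-point curvature estimate along the search direction, in the spirit of the author's earlier acceleration of gradient descent [1]) is an illustrative assumption, not necessarily the exact scheme of the reviewed paper; the test function, the PR+ update, and the simplified Armijo backtracking (standing in for a full Wolfe search) are likewise choices made here for a self-contained example.

```python
# Hedged sketch: nonlinear CG (Polak-Ribiere+) with a steplength
# acceleration factor eta_k, on a uniformly convex quadratic
# f(x) = 2*x0^2 + 0.5*x1^2.  All specifics are illustrative.

def f(x):
    return 2.0 * x[0] ** 2 + 0.5 * x[1] ** 2

def grad(x):
    return [4.0 * x[0], x[1]]

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def backtracking(x, d, g, c1=1e-4, rho=0.5):
    """Armijo backtracking -- a simplified stand-in for a Wolfe search."""
    alpha, fx, slope = 1.0, f(x), dot(g, d)
    while f([xi + alpha * di for xi, di in zip(x, d)]) > fx + c1 * alpha * slope:
        alpha *= rho
    return alpha

def accelerated_cg(x, tol=1e-6, max_iter=200):
    g = grad(x)
    d = [-gi for gi in g]
    for _ in range(max_iter):
        if dot(g, g) < tol ** 2:
            break
        alpha = backtracking(x, d, g)
        # Acceleration: estimate eta_k from gradient information at the
        # trial point z = x + alpha*d (illustrative choice of formula).
        z = [xi + alpha * di for xi, di in zip(x, d)]
        gz = grad(z)
        a_k = alpha * dot(g, d)                                   # < 0 (descent)
        b_k = -alpha * dot([gi - gzi for gi, gzi in zip(g, gz)], d)
        eta = -a_k / b_k if b_k > 0 else 1.0                      # fall back to plain step
        x = [xi + eta * alpha * di for xi, di in zip(x, d)]
        g_new = grad(x)
        # PR+ beta keeps the direction update restart-safe.
        beta = max(0.0, dot([gn - gi for gn, gi in zip(g_new, g)], g_new) / dot(g, g))
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return x
```

On a quadratic, this choice of \(\eta_k\) rescales the Wolfe/Armijo step toward the exact minimizer along \(d_k\), which is one plausible mechanism for the improved reduction in function values reported in the paper.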


65K05 Numerical mathematical programming methods
90C30 Nonlinear programming
Full Text: DOI


[1] Andrei, N., An acceleration of gradient descent algorithm with backtracking for unconstrained optimization, Numerical algorithms, 42, 63-73, (2006) · Zbl 1101.65058
[2] Andrei, N., Numerical comparison of conjugate gradient algorithms for unconstrained optimization, Studies in informatics and control, 16, 333-352, (2007)
[3] Andrei, N., Scaled conjugate gradient algorithms for unconstrained optimization, Computational optimization and applications, 38, 401-416, (2007) · Zbl 1168.90608
[4] Andrei, N., Scaled memoryless BFGS preconditioned conjugate gradient algorithm for unconstrained optimization, Optimization methods and software, 22, 561-571, (2007) · Zbl 1270.90068
[5] Andrei, N., A scaled BFGS preconditioned conjugate gradient algorithm for unconstrained optimization, Applied mathematics letters, 20, 645-650, (2007) · Zbl 1116.90114
[6] Andrei, N., An unconstrained optimization test functions collection, Advanced modeling and optimization, an electronic international journal, 10, 147-161, (2008) · Zbl 1161.90486
[7] Armijo, L., Minimization of functions having Lipschitz continuous first partial derivatives, Pacific journal of mathematics, 16, 1-3, (1966) · Zbl 0202.46105
[8] Bongartz, I.; Conn, A.R.; Gould, N.I.M.; Toint, P.L., CUTE: constrained and unconstrained testing environments, ACM transactions on mathematical software, 21, 123-160, (1995) · Zbl 0886.65058
[9] Boyd, S.; Vandenberghe, L., Convex optimization, (2004), Cambridge University Press · Zbl 1058.90049
[10] Dai, Y.H.; Liao, L.Z., New conjugacy conditions and related nonlinear conjugate gradient methods, Applied mathematics and optimization, 43, 87-101, (2001) · Zbl 0973.65050
[11] Dai, Y.H.; Yuan, Y., A nonlinear conjugate gradient method with a strong global convergence property, SIAM journal on optimization, 10, 177-182, (1999) · Zbl 0957.65061
[12] Dennis, J.E.; Schnabel, R.B., Numerical methods for unconstrained optimization and nonlinear equations, (1983), Prentice-Hall Englewood Cliffs, New Jersey · Zbl 0579.65058
[13] Dolan, E.; Moré, J.J., Benchmarking optimization software with performance profiles, Mathematical programming, 91, 201-213, (2002) · Zbl 1049.90004
[14] Fletcher, R., Practical methods of optimization, (1987), Wiley New York · Zbl 0905.65002
[15] Goldstein, A.A., On steepest descent, SIAM journal on control, 3, 147-151, (1965) · Zbl 0221.65094
[16] Hager, W.W.; Zhang, H., A new conjugate gradient method with guaranteed descent and an efficient line search, SIAM journal on optimization, 16, 170-192, (2005) · Zbl 1093.90085
[17] Hager, W.W.; Zhang, H., A survey of nonlinear conjugate gradient methods, Pacific journal of optimization, 2, 35-58, (2006) · Zbl 1117.90048
[18] Lemaréchal, C., A view of line search, (), 59-78
[19] Liu, D.C.; Nocedal, J., On the limited memory BFGS method for large scale optimization, Mathematical programming, 45, 503-528, (1989) · Zbl 0696.90048
[20] J.J. Moré, D.J. Thuente, On line search algorithms with guaranteed sufficient decrease, Mathematics and Computer Science Division Preprint MCS-P153-0590, Argonne National Laboratory, Argonne, 1990.
[21] Polak, E.; Ribière, G., Note sur la convergence de méthodes de directions conjuguées, Revue française d'informatique et de recherche opérationnelle, 3e année, 16, 35-43, (1969) · Zbl 0174.48001
[22] Polyak, B.T., The conjugate gradient method in extreme problems, USSR computational mathematics and mathematical physics, 9, 94-112, (1969) · Zbl 0229.49023
[23] Potra, F.A.; Shi, Y., Efficient line search algorithm for unconstrained optimization, Journal of optimization theory and applications, 85, 677-704, (1995) · Zbl 0831.90106
[24] Powell, M.J.D., Some global convergence properties of a variable-metric algorithm for minimization without exact searches, SIAM-AMS Proceedings, Philadelphia, 9, 53-72, (1976) · Zbl 0338.65038
[25] Shanno, D.F.; Phua, K.H., Algorithm 500, minimization of unconstrained multivariate functions, ACM transactions on mathematical software, 2, 87-94, (1976) · Zbl 0319.65042
[26] Wolfe, P., Convergence conditions for ascent methods, SIAM review, 11, 226-235, (1969) · Zbl 0177.20603