
A class of gradient unconstrained minimization algorithms with adaptive stepsize. (English) Zbl 0958.65072

The authors present a class of gradient algorithms with adaptive stepsize for the unconstrained minimization problem \[ \min_{x\in \mathbb{R}^n} f(x). \] The proposed class comprises four algorithms: the first two incorporate techniques for the adaptation of a common stepsize for all coordinate directions, while the other two allow an individually adaptive stepsize along each coordinate direction.
Numerical tests are given.
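The two adaptation strategies described in the review can be sketched roughly as follows. This is an illustrative Python sketch, not the authors' actual algorithms: the increase/decrease factors and the sign-based per-coordinate rule (in the spirit of Rprop-type schemes) are assumptions chosen for clarity.

```python
import numpy as np

def grad_descent_common_stepsize(f, grad, x0, lr=0.1, inc=1.1, dec=0.5,
                                 tol=1e-8, max_iter=10000):
    """Gradient descent with one adaptive stepsize shared by all coordinates:
    grow the stepsize after a successful decrease of f, shrink it otherwise.
    (Illustrative sketch; factors inc/dec are assumptions.)"""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x_new = x - lr * g
        f_new = f(x_new)
        if f_new < fx:       # successful step: accept and be bolder
            x, fx = x_new, f_new
            lr *= inc
        else:                # failed step: reject and be more cautious
            lr *= dec
    return x

def grad_descent_per_coordinate(f, grad, x0, lr=0.1, inc=1.2, dec=0.5,
                                tol=1e-8, max_iter=10000):
    """Each coordinate keeps its own stepsize, grown while the sign of the
    corresponding gradient component is stable and shrunk when it flips.
    (Illustrative sketch of per-coordinate adaptation, not the paper's rule.)"""
    x = np.asarray(x0, dtype=float)
    lrs = np.full_like(x, lr)    # one stepsize per coordinate direction
    g_prev = np.zeros_like(x)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        lrs = np.where(g * g_prev > 0, lrs * inc, lrs)  # stable sign: grow
        lrs = np.where(g * g_prev < 0, lrs * dec, lrs)  # sign flip: shrink
        x = x - lrs * g
        g_prev = g
    return x
```

On an anisotropic quadratic such as \(f(x)=x_1^2+10x_2^2\), the per-coordinate variant can use a large stepsize along the flat direction and a small one along the steep direction simultaneously, which a single common stepsize cannot do.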

MSC:

65K05 Numerical mathematical programming methods
90C30 Nonlinear programming

Software:

OPTAC; minpack; BRENT

References:

[1] Androulakis, G. S.; Vrahatis, M. N., OPTAC: a portable software package for analyzing and comparing optimization methods by visualization, J. Comput. Appl. Math., 72, 41-62 (1996) · Zbl 0857.65066
[2] Androulakis, G. S.; Vrahatis, M. N.; Grapsa, T. N., Studying the performance of optimization methods by visualization, Systems Anal. Model. Simulation, 25, 21-42 (1996) · Zbl 0933.90061
[3] Armijo, L., Minimization of functions having Lipschitz continuous first partial derivatives, Pacific J. Math., 16, 1-3 (1966) · Zbl 0202.46105
[5] Blum, E. K., Approximation of Boolean functions by sigmoidal networks. Part I: XOR and other two-variable functions, Neural Comput., 1, 532-540 (1989)
[7] Brewster, M. E.; Kannan, R., Nonlinear successive over-relaxation, Numer. Math., 44, 309-315 (1984) · Zbl 0566.65045
[10] Curry, H. B., The method of steepest descent for non-linear minimization problems, Quart. Appl. Math., 2, 258-261 (1944) · Zbl 0061.26801
[11] Cauchy, A., Méthode générale pour la résolution des systèmes d’équations simultanées, Comp. Rend. Acad. Sci. Paris, 25, 536-538 (1847)
[14] Fletcher, R.; Reeves, C., Function minimization by conjugate gradients, Comput. J., 7, 149-154 (1964) · Zbl 0132.11701
[15] Gilbert, J. C.; Nocedal, J., Global convergence properties of conjugate gradient methods for optimization, SIAM J. Optim., 2, 21-42 (1992) · Zbl 0767.90082
[16] Goldstein, A. A., Cauchy’s method of minimization, Numer. Math., 4, 146-150 (1962) · Zbl 0105.10201
[17] Goldstein, A. A., On steepest descent, SIAM J. Control, 3, 147-151 (1965) · Zbl 0221.65094
[18] Goldstein, A. A.; Price, J. F., An effective algorithm for minimization, Numer. Math., 10, 184-189 (1967) · Zbl 0161.35402
[19] Grapsa, T. N.; Vrahatis, M. N., A dimension-reducing method for unconstrained optimization, J. Comput. Appl. Math., 66, 239-253 (1996) · Zbl 0856.65074
[21] Haralick, R.; Shanmugan, K.; Dinstein, I., Textural features for image classification, IEEE Trans. System, Man Cybernet., 3, 610-621 (1973)
[23] Heller, D., A survey of parallel algorithms in numerical linear algebra, SIAM Rev., 20, 740-777 (1978) · Zbl 0408.68033
[24] Jacobs, R. A., Increased rates of convergence through learning rate adaptation, Neural Networks, 1, 295-307 (1988)
[26] Magoulas, G. D.; Vrahatis, M. N.; Androulakis, G. S., Effective backpropagation training with variable stepsize, Neural Networks, 10, 69-82 (1997) · Zbl 0891.68090
[28] Moré, J. J.; Garbow, B. S.; Hillstrom, K. E., Testing unconstrained optimization software, ACM Trans. Math. Software, 7, 17-41 (1981) · Zbl 0454.65049
[29] Nocedal, J., Theory of algorithms for unconstrained optimization, Acta Numerica, 1, 199-242 (1992) · Zbl 0766.65051
[30] Ohanian, P. P.; Dubes, R. C., Performance evaluation for four classes of textural features, Pattern Recognition, 25, 819-833 (1992)
[32] Ortega, J. M.; Voigt, R. G., Solution of partial differential equations on vector and parallel computers, SIAM Rev., 27, 149-270 (1985) · Zbl 0644.65075
[35] Powell, M. J.D., Direct search algorithms for optimization calculations, Acta Numerica, 7, 287-336 (1998) · Zbl 0911.65050
[38] Sperduti, A.; Starita, A., Speed up learning and network optimization with extended back-propagation, Neural Networks, 6, 365-383 (1993)
[40] Strang, J.; Taxt, T., Local frequency features for texture classification, Pattern Recognition, 27, 1397-1406 (1994)
[42] Vogl, T. P.; Mangis, J. K.; Rigler, A. K.; Zink, W. T.; Alkon, D. L., Accelerating the convergence of the back-propagation method, Biol. Cybernet., 59, 257-263 (1988)
[43] Voigt, R. G., Rates of convergence for a class of iterative procedures, SIAM J. Numer. Anal., 8, 127-134 (1971) · Zbl 0232.65044
[44] Vrahatis, M. N.; Androulakis, G. S.; Manoussakis, G. E., A new unconstrained optimization method for imprecise function and gradient values, J. Math. Anal. Appl., 197, 586-607 (1996) · Zbl 0887.90166
[45] Wessel, L. F.; Barnard, E., Avoiding false local minima by proper initialization of connections, IEEE Trans. Neural Networks, 3, 899-905 (1992)
[46] Wolfe, P., Convergence conditions for ascent methods, SIAM Rev., 11, 226-235 (1969) · Zbl 0177.20603
[47] Wolfe, P., Convergence conditions for ascent methods. II: Some corrections, SIAM Rev., 13, 185-188 (1971) · Zbl 0216.26901
[48] Young, D., Iterative methods for solving partial difference equations of elliptic type, Trans. Amer. Math. Soc., 76, 92-111 (1954) · Zbl 0055.35704
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.