
Collinear scaling and sequential estimation in sparse optimization algorithms. (English) Zbl 0508.49028


MSC:

49M15 Newton-type methods
65K05 Numerical mathematical programming methods
62L12 Sequential estimation
65F35 Numerical computation of matrix norms, conditioning, scaling

Software:

LINPACK
Full Text: DOI

References:

[1] D. Bertsekas, ”Control of uncertain systems with a set membership description”, Ph.D. Thesis, Department of Electrical Engineering, Massachusetts Institute of Technology, Cambridge, MA (1971).
[2] C.G. Broyden, ”A class of methods for solving nonlinear simultaneous equations”, Mathematics of Computation 19 (1965) 577–593.
[3] C.G. Broyden, ”A new double-rank minimization algorithm”, Notices of the American Mathematical Society 16 (1969) 670.
[4] W.C. Davidon, ”Variable metric method for minimization”, Report ANL-5990 Rev., Argonne National Laboratory, Argonne, IL (1959).
[5] W.C. Davidon, ”Conic approximations and collinear scalings for optimizers”, SIAM Journal on Numerical Analysis 17 (1980).
[6] J.E. Dennis and J.J. Moré, ”Quasi-Newton methods, motivation and theory”, SIAM Review 19 (1977) 46–89.
[7] J.E. Dennis and R.B. Schnabel, ”Least change secant updates for quasi-Newton methods”, SIAM Review 21 (1979) 443–459.
[8] J.J. Dongarra, C.B. Moler, J.R. Bunch and G.W. Stewart, LINPACK user’s guide, (SIAM Publications, Philadelphia, PA, 1979).
[9] R. Fletcher, ”A new approach to variable metric algorithms”, The Computer Journal 13 (1970) 313–322.
[10] R. Fletcher and M.J.D. Powell, ”A rapidly convergent descent method for minimization”, The Computer Journal 6 (1963) 163–168.
[11] D.M. Gay, ”Computing optimal locally constrained steps”, Mathematics Research Center Report #2000, University of Wisconsin, Madison, WI (1979).
[12] P.E. Gill and W. Murray, ”Newton type methods for unconstrained and linearly constrained optimization”, Mathematical Programming 7 (1974) 311–350.
[13] D. Goldfarb, ”A family of variable-metric methods derived by variational means”, Mathematics of Computation 24 (1970) 23–26.
[14] D. Goldfarb, ”Generating conjugate directions without line searches using factorized variable metric formulas”, Mathematical Programming 13 (1977) 94–110.
[15] R.E. Kalman, ”A new approach to linear filtering and prediction problems”, ASME Transactions. Journal of Basic Engineering 82D (1960) 35–45.
[16] R.E. Kalman, ”New methods in Wiener filtering theory”, in: J.L. Bogdanoff and F. Kozin, eds., Proceedings of the First Symposium on Engineering Applications of Random Function Theory and Probability (Wiley, New York, 1963), pp. 270–388.
[17] B. Lam, ”On the convergence of a quasi-Newton method for sparse nonlinear systems”, Mathematics of Computation 32 (1978) 447–451.
[18] E.S. Marwill, ”Exploiting sparsity in Newton-like methods”, Ph.D. Thesis, Department of Computer Science Report TR 78-335, Cornell University, Ithaca, NY (1978).
[19] E.S. Marwill, ”Convergence results for Schubert’s method for solving sparse nonlinear equations”, SIAM Journal on Numerical Analysis 16 (1979) 588–604.
[20] S.K. Mitter and P. Todalagi, ”Variable metric methods and filtering theory”, Department of Electrical Engineering and Computer Science Report, Massachusetts Institute of Technology, Cambridge, MA (1979).
[21] J.J. Moré, ”The Levenberg-Marquardt algorithm: Implementation and theory”, in: G.A. Watson, ed., Numerical analysis, Lecture Notes in Mathematics 630, Proceedings biennial conference, Dundee, 1977, (Springer, Heidelberg, 1978) pp. 105–116.
[22] J.M. Ortega and W.C. Rheinboldt, Iterative solution of nonlinear equations in several variables (Academic Press, New York, 1970).
[23] M.J.D. Powell, ”Convergence properties of a class of minimization algorithms”, in: O.L. Mangasarian, R.R. Meyer, and S.M. Robinson, eds. Nonlinear programming 2 (Academic Press, New York, 1975), pp. 1–27.
[24] M.J.D. Powell, ”Quasi-Newton formulae for sparse second derivative matrices”, Internal Report DAMTP 1979/NA7, Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge, England (1979).
[25] L.K. Schubert, ”Modification of a quasi-Newton method for nonlinear equations with a sparse Jacobian”, Mathematics of Computation 24 (1970) 27–30.
[26] F.C. Schweppe, ”Recursive state estimation: Unknown but bounded errors and system input”, IEEE Transactions on Automatic Control AC-13 (1968).
[27] D.F. Shanno, ”Conditioning of quasi-Newton methods for function minimization”, Mathematics of Computation 24 (1970) 647–656.
[28] D.F. Shanno, ”On variable metric methods for sparse Hessians”, Mathematics of Computation 34 (1980) 499–514.
[29] D.C. Sorensen, ”The Q-superlinear convergence of a collinear scaling algorithm for unconstrained optimization”, SIAM Journal on Numerical Analysis 17 (1980) 84–114.
[30] D.C. Sorensen, ”Newton’s method with a model trust-region modification”, SIAM Journal on Numerical Analysis 19 (1982).
[31] G.W. Stewart, Introduction to matrix computations (Academic Press, New York, 1973).
[32] M.N. Thapa, ”Optimization of unconstrained functions with sparse Hessian matrices”, Ph.D. Thesis, Department of Operations Research, Stanford University, Stanford, CA (1980).
[33] S.W. Thomas, ”Sequential estimation techniques for quasi-Newton algorithms”, Ph.D. Thesis, Department of Computer Science Report TR 75-227, Cornell University, Ithaca, NY (1975).
[34] Ph.L. Toint, ”On sparse and symmetric updating subject to a linear equation”, Mathematics of Computation 31 (1977) 954–961.
[35] Ph.L. Toint, ”On the superlinear convergence of an algorithm for solving sparse minimization problems”, SIAM Journal on Numerical Analysis 16 (1979) 1036–1045.
[36] Ph.L. Toint, ”A note about sparsity exploiting quasi-Newton updates”, Department of Mathematics Report 79/5, Facultés Universitaires de Namur, Namur, Belgium (1979).