Dealing with degeneracy in reduced gradient algorithms. (English) Zbl 0573.90083

Many algorithms for linearly constrained optimization problems proceed by solving a sequence of subproblems. In these subproblems, the number of variables is implicitly reduced by using the linear constraints to express certain 'basic' variables in terms of other variables. Difficulties may arise, however, if degeneracy is present; that is, if one or more basic variables are at lower or upper bounds. In this situation, arbitrarily small movements along a feasible search direction in the reduced problem may result in infeasibilities for basic variables in the original problem. For such cases, the search direction is typically discarded, a new reduced problem is formed and a new search direction is computed. Such a process may be extremely costly, particularly in large-scale optimization where degeneracy is likely and good search directions can be expensive to compute. This paper is concerned with a practical method for ensuring that directions computed in the reduced space are actually feasible in the original problem. It is based on a generalization of the 'maximal basis' result first introduced by the authors [Math. Program. Study 15, 125-147 (1981; Zbl 0477.90025)] for large nonlinear network optimization problems.
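The mechanism described above can be illustrated with a minimal numerical sketch. This is not the authors' method; the toy constraint, variable partition, and function names are invented for illustration. It shows how a direction that is perfectly feasible in the reduced problem can still be blocked by a degenerate basic variable in the original problem:

```python
import numpy as np

# Toy linearly constrained problem: x1 + x2 + x3 = 1, with bounds x >= 0.
# x1 is taken as the basic variable; x2, x3 are the independent variables
# of the reduced problem.  (All names here are illustrative assumptions.)
A = np.array([[1.0, 1.0, 1.0]])

# Degenerate feasible point: the basic variable x1 sits at its lower bound 0.
x = np.array([0.0, 1.0, 0.0])
B = A[:, [0]]          # basis matrix (column of the basic variable)
N = A[:, [1, 2]]       # columns of the independent variables

def induced_basic_direction(d_indep):
    """Change in the basic variables induced by a reduced-space direction:
    d_B = -B^{-1} N d_N, which keeps A(x + t*d) = b for every step t."""
    return -np.linalg.solve(B, N @ d_indep)

# A direction that is feasible in the reduced problem (x2, x3 stay >= 0
# for small steps) ...
d_indep = np.array([-0.5, 1.0])
d_basic = induced_basic_direction(d_indep)

# ... but it drives the degenerate basic variable x1 below its lower bound,
# so no positive step along it is feasible in the original problem.
blocked = bool(np.any(np.isclose(x[[0]], 0.0) & (d_basic < -1e-12)))
print(d_basic, blocked)   # d_basic[0] = -0.5, blocked = True
```

At this point a conventional reduced gradient code would discard the direction and rebuild the reduced problem; the paper's contribution is a basis-selection strategy that avoids generating such blocked directions in the first place.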


90C30 Nonlinear programming
49M37 Numerical methods based on nonlinear programming
65K05 Numerical mathematical programming methods


Zbl 0477.90025
Full Text: DOI


[1] J. Abadie and J. Carpentier, ”Generalization of the Wolfe reduced gradient method to the case of nonlinear constraints”, in: R. Fletcher, ed.,Optimization (Academic Press, New York, 1969) pp. 37–47. · Zbl 0254.90049
[2] R.G. Bland, ”New finite pivoting rules for the simplex method”,Mathematics of Operations Research 2 (1977) 103–107. · Zbl 0408.90050
[3] R.S. Dembo, ”A primal truncated-Newton algorithm with application to large-scale nonlinear network optimization”, Working Paper No. 72, Series B, School of Organization and Management, Yale University (New Haven, CT, 1983).
[4] R.S. Dembo and J.G. Klincewicz, ”A scaled reduced gradient algorithm for network flow problems with convex separable costs”,Mathematical Programming Study 15 (1981) 125–147. · Zbl 0477.90025
[5] J.G. Klincewicz, ”A Newton method for convex separable network flow problems”,Networks 13 (1983) 427–442. · Zbl 0518.90016
[6] E.L. Lawler,Combinatorial optimization: networks and matroids (Holt, Rinehart & Winston, New York, 1976). · Zbl 0413.90040
[7] C.E. Lemke, ”The constrained gradient method of linear programming”,Journal of the Society for Industrial and Applied Mathematics 9 (1961) 1–17. · Zbl 0112.12302
[8] B.A. Murtagh and M.A. Saunders, ”Large scale linearly constrained optimization”,Mathematical Programming 14 (1978) 41–72. · Zbl 0383.90074
[9] D.F. Shanno and R.E. Marsten, ”Conjugate gradient methods for linearly constrained nonlinear programming”,Mathematical Programming Study 16 (1983) 149–161. · Zbl 0477.90062