## Nonlinear programming: sequential unconstrained minimization techniques.(English)Zbl 0193.18805

Research Analysis Corporation, Research Series. New York-London-Sydney-Toronto: John Wiley and Sons, Inc. xiv, 210 p. (1968).
This book is intended to provide a unified theory on methods for transforming a nonlinear programming problem into a sequence of unconstrained minimizations of the appropriately modified objective function.
The first chapter gives a detailed historical survey of sequential unconstrained methods. The second chapter presents basic mathematical programming theory in the Euclidean space $$E^n$$. Besides known results on first-order local optimality criteria, including of course the Kuhn-Tucker theorem, this chapter also contains several new results of the authors on second-order criteria (i.e. criteria involving second derivatives).
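For orientation, the now-standard second-order sufficiency condition, to which the authors' results in this chapter are closely related, can be stated as follows; the formulation below is a paraphrase by the present reviewer, not a quotation from the book:

```latex
% Second-order sufficient conditions for
%   minimize f(x)  subject to  g_i(x) >= 0   (i = 1,...,m),
% with Lagrangian  L(x,u) = f(x) - \sum_i u_i g_i(x):
% if (x^*, u^*) satisfies the Kuhn-Tucker conditions and
y^{\top}\,\nabla_x^2 L(x^*,u^*)\,y > 0
\quad\text{for every } y \neq 0 \text{ such that}
\quad \nabla g_i(x^*)^{\top} y = 0 \ \ (i:\ u_i^* > 0),
\quad \nabla g_i(x^*)^{\top} y \ge 0 \ \ (i:\ g_i(x^*) = 0,\ u_i^* = 0),
% then x^* is a strict local minimum.
```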
Chapter 3 deals with the so-called interior point method, which consists in replacing the problem of concern: $(\text{B})\quad \text{Minimize }f(x), \text{ subject to }g_i(x) \ge 0 \quad (i = 1,2, \ldots, m)$ by a sequence of unconstrained problems of the form $(\text{B}_k)\quad \text{Minimize }f(x)+s(r_k)I(x),$ where $$r_k$$ is a positive parameter, $$s(r)$$ is a decreasing function of $$r$$ such that $$\lim_{r\to 0} s(r) = 0$$, and $$I(x)$$ is a function continuous in the interior $$R^0$$ of the constraint set $$R = \{x\mid g_i(x)\ge 0\ (i =1,2, \ldots,m)\}$$ and such that $$\lim_{k\to\infty} I(x^k) = \infty$$ for any infinite sequence $$\{x^k\}\subset R^0$$ converging to a point of $$R\backslash R^0$$ (for instance, one may take $$s(r_k)I(x) = -r_k \sum_{i=1}^m \ln g_i(x)$$). Starting from a point $$x^0\in R^0$$ one proceeds to a local solution $$x^1$$ of $$(B_1)$$; then, starting from $$x^1$$, one proceeds to a local solution $$x^2$$ of $$(B_2)$$ with $$r_2<r_1$$, and so on. Intuitively the penalty term $$s(r_k)I(x)$$ keeps $$x^k$$ in $$R^0$$, and it is proved that, under appropriate assumptions, the sequence $$x^k$$ exists and, as $$r_k\to 0$$, converges to a local solution of the original problem.
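The scheme with the logarithmic term can be sketched in a few lines; the test problem, the parameter schedule, and the use of a derivative-free solver below are the reviewer's illustrative choices, not taken from the book:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical test problem: minimize f(x) = (x0+1)^2 + (x1+1)^2
# subject to g1(x) = x0 >= 0, g2(x) = x1 >= 0; the solution is (0, 0).
f = lambda x: (x[0] + 1.0)**2 + (x[1] + 1.0)**2
g = [lambda x: x[0], lambda x: x[1]]

def barrier_objective(x, r):
    """f(x) + s(r)I(x) with the choice s(r)I(x) = -r * sum_i ln g_i(x)."""
    if any(gi(x) <= 0 for gi in g):
        return np.inf                      # outside the interior R^0
    return f(x) - r * sum(np.log(gi(x)) for gi in g)

x = np.array([1.0, 1.0])                   # starting point x^0 in R^0
r = 1.0
for k in range(20):                        # solve (B_k), warm-starting from x^{k-1}
    x = minimize(lambda z: barrier_objective(z, r), x,
                 method="Nelder-Mead").x
    r *= 0.5                               # r_{k+1} < r_k, r_k -> 0
```

Because every iterate has a finite barrier value, the sequence stays strictly feasible while drifting toward the boundary point that solves the constrained problem.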
In Chapter 4 the exterior point method and the mixed method are described. The former consists in replacing the original problem by a sequence of unconstrained problems of the form: $\text{Minimize }f(x)+p(t_k)O(x),$ where $$t_k$$ is a positive parameter, $$p(t)$$ is an increasing function of $$t$$ such that $$\lim_{t\to\infty} p(t) = \infty$$, and $$O(x)$$ is a continuous function equal to $$0$$ for $$x\in R$$ and positive for $$x\notin R$$ (for example, one may take $$p(t)O(x) = t \sum_{i=1}^m \{[g_i(x) - \vert g_i(x)\vert ]/2\}^2$$).
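A one-dimensional sketch of the exterior scheme, with a hypothetical problem chosen so that the optimum is approached from outside the feasible set (note that $$[g - \vert g\vert]/2 = \min(g, 0)$$):

```python
from scipy.optimize import minimize_scalar

# Hypothetical example: minimize f(x) = (x-2)^2 subject to g(x) = 1 - x >= 0.
# The unconstrained minimum x = 2 is infeasible; the solution x = 1 is
# approached from OUTSIDE the feasible set R = (-inf, 1].
f = lambda x: (x - 2.0)**2
g = lambda x: 1.0 - x
O = lambda x: min(g(x), 0.0)**2            # O = 0 on R, O > 0 off R;
                                           # this is {[g - |g|]/2}^2
t = 1.0
for k in range(25):
    x = minimize_scalar(lambda z: f(z) + t * O(z)).x
    t *= 2.0                               # t_{k+1} > t_k, t_k -> infinity
```

Here the penalized minimizer is $$x_k = (2+t_k)/(1+t_k) > 1$$, so the iterates are all infeasible and converge to the constrained solution as $$t_k\to\infty$$.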
The latter consists in replacing the original problem by a sequence of unconstrained problems of the form: $\text{Minimize }f(x)+s(r_k)I(x) + p(t_k)O(x),$ where $$I(x)$$ is defined relative to the set of constraints which we want to be strictly satisfied at every iterate of the algorithm, and $$O(x)$$ is defined relative to the remaining constraints.
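The mixed scheme can be illustrated on another hypothetical one-dimensional problem, with one constraint handled by a barrier term and the other by a penalty term:

```python
import math
from scipy.optimize import minimize_scalar

# Hypothetical example: minimize f(x) = (x-2)^2 with
#   g1(x) = x >= 0      held strictly throughout (interior term I),
#   g2(x) = 1 - x >= 0  allowed to be violated en route (exterior term O).
# The solution is x = 1.
f = lambda x: (x - 2.0)**2
I = lambda x: -math.log(x)                 # blows up as x -> 0+
O = lambda x: min(1.0 - x, 0.0)**2         # zero wherever 1 - x >= 0

r, t = 1.0, 1.0
for k in range(30):
    x = minimize_scalar(lambda z: f(z) + r * I(z) + t * O(z),
                        bounds=(1e-12, 10.0), method="bounded").x
    r *= 0.5                               # barrier weight -> 0
    t *= 2.0                               # penalty weight -> infinity
```

The iterates keep $$g_1$$ strictly satisfied while converging to the point where $$g_2$$ is binding.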
Chapter 5 is devoted to the analysis of trajectories of local unconstrained minima converging to single-point sets of constrained minima under various assumptions. It is pointed out that the theorems established here may be used to estimate the solution and thereby accelerate convergence.
Chapter 6 deals with convex programming. In this case a local solution is a global one, and the interior point method ensures that $$f(x^k)$$ decreases monotonically as $$k\to\infty$$, whereas for the exterior point technique it is generally possible to construct a sequence $$x^k$$ of feasible points such that $$f(x^k)$$ converges to the optimum from above. Furthermore, on the basis of duality theory, upper and lower bounds on the optimal value may be calculated at each step.
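For the logarithmic barrier, for instance, these bounds take a particularly simple form (a standard computation, assuming $$f$$ convex and the $$g_i$$ concave, sketched here by the reviewer): if $$x^k$$ minimizes $$f(x) - r_k\sum_{i=1}^m \ln g_i(x)$$, then the multipliers $$u_i^k = r_k/g_i(x^k) > 0$$ are dual feasible and

```latex
% Stationarity at x^k:
%   \nabla f(x^k) - \sum_{i=1}^m \frac{r_k}{g_i(x^k)} \nabla g_i(x^k) = 0,
% so the Lagrangian dual value at (x^k, u^k) is
L(x^k, u^k) = f(x^k) - \sum_{i=1}^m u_i^k\, g_i(x^k) = f(x^k) - m\, r_k,
% and weak duality sandwiches the optimal value v^*:
f(x^k) - m\, r_k \;\le\; v^* \;\le\; f(x^k).
```

The gap between the two bounds is exactly $$m\,r_k$$, which tends to $$0$$ with $$r_k$$.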
In Chapter 7 several variations of the basic methods are discussed and other sequential unconstrained techniques presented. Among these let us mention the dual method of Falk for convex problems, the generalized Lagrange multiplier technique of Everett, an exterior point algorithm of Zangwill, the gradient projection method of Rosen, and the authors’ $$Q$$-function type method. The latter consists in solving, in place of the original problem, the sequence of unconstrained problems: $\text{Minimize }Q^k(x) = Q[-f(x)+f(x^{k-1}),\,g_1(x),\ldots,g_m(x)],$ where $$x^{k-1}$$ is the point resulting from the preceding step ($$x^0$$ being an arbitrary point in the interior of the constraint set), and $$Q(z_0,z_1,\ldots,z_m)$$ is a function of $$z=(z_0,z_1,\ldots,z_m)$$, continuous in the set $$R' = \{z\mid z_0>0, z_1>0,\ldots,z_m>0\}$$ and such that $$\liminf_{h\to\infty} Q(z^h) > \inf_{z\in R'} Q(z)$$ for any infinite sequence $$\{z^h\} \subset R'$$ converging to a point $$\bar z\notin R'$$ (for example, $$Q^k(x)= \frac1{f(x^{k-1})-f(x)}+ \sum_{i=1}^m \frac1{g_i(x)}$$).
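One iteration of the $$Q$$-function method with the reciprocal example can be sketched as follows; the test problem and solver are again the reviewer's illustrative choices:

```python
import math
from scipy.optimize import minimize_scalar

# Hypothetical example: minimize f(x) = x subject to g(x) = x >= 0
# (optimum at x = 0), with Q(z_0, z_1) = 1/z_0 + 1/z_1.
f = lambda x: x
g = lambda x: x

def Qk(x, f_prev):
    """Q[-f(x) + f(x^{k-1}), g(x)]; +inf outside the region R'."""
    if f_prev - f(x) <= 0 or g(x) <= 0:
        return math.inf
    return 1.0 / (f_prev - f(x)) + 1.0 / g(x)

x = 1.0                                     # interior starting point x^0
for k in range(30):
    f_prev = f(x)
    x = minimize_scalar(lambda z: Qk(z, f_prev),
                        bounds=(1e-15, x), method="bounded",
                        options={"xatol": 1e-12}).x
```

In this example the inner minimum of $$1/(c-z) + 1/z$$ over $$0 < z < c$$ is at $$z = c/2$$, so each step halves the objective value while staying strictly feasible.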
The last chapter deals with algorithms for minimizing an unconstrained function. Some of the most promising current techniques are reviewed and three new methods proposed, among which the most effective, in the authors’ experience, seems to be the so-called modified Newton method. Other computational questions implicit in the development of unconstrained techniques are also discussed: convex problems with special structure, acceleration by extrapolation, convergence criteria, and finding an interior point of the constraint set.
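The review does not reproduce the authors' modification of Newton's method; one common realization of the idea (shifting an indefinite Hessian by a multiple of the identity until it is positive definite, with an Armijo line search) might look like this, with a hypothetical nonconvex test function:

```python
import numpy as np

def modified_newton(f, grad, hess, x0, tol=1e-8, max_iter=100):
    """Newton's method safeguarded for indefinite Hessians: shift by tau*I
    until a Cholesky factorization succeeds, then take a damped Newton step.
    (One common 'modified Newton' scheme; the book's own method may differ.)"""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        H = hess(x)
        tau = 0.0
        while True:
            try:
                np.linalg.cholesky(H + tau * np.eye(len(x)))
                break                                  # shifted Hessian is p.d.
            except np.linalg.LinAlgError:
                tau = max(2.0 * tau, 1e-3)             # increase the shift
        p = np.linalg.solve(H + tau * np.eye(len(x)), -g)   # descent direction
        t = 1.0
        while f(x + t * p) > f(x) + 1e-4 * t * (g @ p):     # Armijo backtracking
            t *= 0.5
        x = x + t * p
    return x

# Hypothetical test function, nonconvex with an indefinite Hessian near 0;
# local minima at x = (+-1/sqrt(2), 0).
f = lambda x: x[0]**4 - x[0]**2 + x[1]**2
grad = lambda x: np.array([4*x[0]**3 - 2*x[0], 2*x[1]])
hess = lambda x: np.array([[12*x[0]**2 - 2, 0.0], [0.0, 2.0]])
x_star = modified_newton(f, grad, hess, [0.1, 1.0])
```

Near the origin the Hessian has a negative eigenvalue, so the shift is exercised; once the iterates reach the convex basin, pure Newton steps take over.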
On the whole the book provides a comprehensive reference for the theory and computational implementation of sequential unconstrained methods in nonlinear programming. In addition it contains a concise reference for mathematical programming theory and some of the most recent advances in the theory and implementation of methods for unconstrained minimization. However, only works published in the West are mentioned in this book.
Another remark worth making is that some proofs might be made much simpler. For example, Corollary 2, page 21, might be proved more briefly by making use of the implicit function theorem; the proof given for Theorem 7, page 47, is defective, whereas this theorem is nearly obvious: in fact, if $$V(x)$$ denotes the compact set corresponding to each $$x\in A^*$$ by the first definition on page 46, and $$E$$ denotes the closed set corresponding to $$A^*$$ by the third definition, then, because of the compactness of $$A^*$$, we have $$A^*\subset V(x^{i_1}) \cup \cdots\cup V(x^{i_N})$$ for some $$x^{i_1}, \ldots,x^{i_N}\in A^*$$, and it suffices to take $$S = E \cap [V(x^{i_1}) \cup \cdots\cup V(x^{i_N})]$$.
Reviewer: Hoang Tuy (Hanoi)
### MSC:

90C30 Nonlinear programming
90-02 Research exposition (monographs, survey articles) pertaining to operations research and mathematical programming

### Keywords:

operations research