Trust-region methods.

*(English)* Zbl 0958.65071
MPS/SIAM Series on Optimization 1. Philadelphia, PA: SIAM, Society for Industrial and Applied Mathematics; MPS, Mathematical Programming Society (ISBN 0-89871-460-5). xix, 959 p. (2000).

This interesting and very complete book treats trust-region methods for optimization. Let us quickly recall the definition of a basic trust-region method. At each iterate \(x_k\), we first define a model \(m_k(x)\) whose purpose is to approximate the objective function within a suitable neighbourhood of \(x_k\), which we refer to as the trust region. The trust region is the set of points
\[
B_k= \{x\in \mathbb{R}^n : \|x- x_k\|\leq \Delta_k\},
\]
where \(\Delta_k\) is the trust-region radius. A trial point \(x_k+ s_k\) is then sought with the aim of reducing the model, while satisfying \(\|s_k\|\leq \Delta_k\). We ask that the reduction in the model at the trial point bear some relationship to the gradient \(\nabla_xf(x_k)\), and we say that any step which achieves this gives a sufficient reduction of the model. The objective of these methods is to find a local solution of the problem
\[
\underset{x\in \mathbb{R}^n} {\text{Minimize}} f(x).
\]
Having determined the trial step, the objective function is computed at \(x_k+ s_k\) and compared with the value predicted by the model at this point, that is \(m_k(x_k+ s_k)\). If the sufficient reduction predicted by the model is realized by the objective function, the trial point is accepted as the next iterate and the trust region is expanded or kept the same; otherwise, the trial point is rejected and the trust region is contracted.
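The basic iteration just described can be sketched as follows. This is only an illustrative outline, not the authors' algorithm: it uses a quadratic Taylor model, the classical dogleg step as an approximate subproblem solver (one of many options the book covers), and standard but arbitrary threshold constants (0.25, 0.75) for the ratio test; a positive definite Hessian is assumed so the Newton step is well defined.

```python
import numpy as np

def dogleg_step(g, B, delta):
    """Approximately minimize the quadratic model g@s + 0.5*s@B@s
    over the ball ||s|| <= delta (classical dogleg; assumes B pos. def.)."""
    pB = np.linalg.solve(B, -g)                # full Newton step
    if np.linalg.norm(pB) <= delta:
        return pB
    pU = -(g @ g) / (g @ B @ g) * g            # steepest-descent (Cauchy) step
    if np.linalg.norm(pU) >= delta:
        return -delta * g / np.linalg.norm(g)  # truncated gradient step
    # Otherwise follow the dogleg path from pU toward pB to the boundary.
    d = pB - pU
    a, b, c = d @ d, 2 * (pU @ d), pU @ pU - delta**2
    tau = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return pU + tau * d

def trust_region(f, grad, hess, x0, delta0=1.0, delta_max=10.0,
                 eta=0.1, tol=1e-8, max_iter=100):
    """Basic trust-region loop: model, trial step, ratio test, radius update."""
    x, delta = np.asarray(x0, dtype=float), delta0
    for _ in range(max_iter):
        g, B = grad(x), hess(x)
        if np.linalg.norm(g) < tol:
            break
        s = dogleg_step(g, B, delta)
        predicted = -(g @ s + 0.5 * s @ B @ s)       # model reduction
        actual = f(x) - f(x + s)                     # true reduction
        rho = actual / predicted if predicted > 0 else -1.0
        if rho < 0.25:                               # poor agreement: shrink
            delta *= 0.25
        elif rho > 0.75 and np.isclose(np.linalg.norm(s), delta):
            delta = min(2.0 * delta, delta_max)      # good agreement: expand
        if rho > eta:                                # sufficient reduction
            x = x + s                                # accept the trial point
    return x
```

On a strictly convex quadratic such as \(f(x)=x_1^2+10x_2^2\), the loop accepts boundary steps until the Newton step fits inside the region, after which it terminates at the minimizer.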

The authors consider constrained and unconstrained optimization problems for smooth and nonsmooth functionals. The main tools in functional analysis are first recalled as well as the classical optimization methods such as the conjugate gradient method.

The main chapters are:

– trust-region methods for unconstrained optimization;

– trust-region methods for constrained optimization with convex constraints;

– trust-region methods for general constrained optimization and systems of nonlinear equations (penalty function methods, sequential quadratic programming methods, nonlinear fitting).

A very complete, annotated bibliography ends this fascinating book. I only regret that global optimization methods are not described in this work. I warmly recommend this book to researchers and engineers confronted with concrete optimization problems.


Reviewer: Yves Cherruault (Paris)

##### MSC:

| Code | Classification |
| --- | --- |
| 65K05 | Numerical mathematical programming methods |
| 65-02 | Research exposition (monographs, survey articles) pertaining to numerical analysis |
| 90C30 | Nonlinear programming |
| 90-02 | Research exposition (monographs, survey articles) pertaining to operations research and mathematical programming |
| 90C55 | Methods of successive quadratic programming type |