# zbMATH — the first resource for mathematics

The quadratic approximation in nonlinear extremum problems. (English. Russian original) Zbl 0815.90125
Phys.-Dokl. 38, No. 8, 320-322 (1993); translation from Dokl. Akad. Nauk 331, No. 4, 425-427 (1993).
The overwhelming majority of optimization algorithms admit the following interpretation. Let $$f: X\to \mathbb{R}$$ be the function to be minimized, where $$X$$ is some space. A model of the function $$f$$ imitating its behavior “near” $$\bar x$$ is associated with the current state $$\bar x\in X$$. The step of the extremum search – the displacement from the state $$\bar x$$ – is determined by replacing the original problem with the auxiliary problem of minimizing the model. For example, if $$X= \mathbb{R}^ n$$, gradient procedures correspond to linear models and Newton procedures to quadratic models.
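The correspondence between models and procedures can be illustrated on a toy quadratic objective. The matrix, vector, and step size below are illustrative choices, not taken from the paper; minimizing the linear model over a small step recovers the gradient step, while minimizing the quadratic model recovers the Newton step.

```python
import numpy as np

# Toy objective f(x) = 0.5 x^T A x - b^T x (illustrative data, not from the paper).
A = np.array([[3.0, 1.0], [1.0, 2.0]])   # symmetric positive definite
b = np.array([1.0, 1.0])

def grad(x):
    return A @ x - b

def gradient_step(x, t=0.1):
    # minimizer of the linear model f(x) + g^T d, restricted to a small step t
    return x - t * grad(x)

def newton_step(x):
    # minimizer of the quadratic model f(x) + g^T d + 0.5 d^T A d
    return x - np.linalg.solve(A, grad(x))

x0 = np.zeros(2)
x_star = np.linalg.solve(A, b)   # exact minimizer, from A x = b
```

Since the objective is itself quadratic, one Newton step from any starting point lands exactly on the minimizer, whereas the gradient step only moves toward it.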
Since it makes sense to use a model only where it approximates the function well, one of the most important concepts (and objects of study) in present-day mathematical programming is the confidence region, within which the solution of the model problem is sought. The extremum search therefore consists of two basic steps: determination of the displacement with respect to the model from the current state $$\bar x$$, and improvement (renewal) of the confidence region relative to this state. Examples show clearly that the source of many difficulties in the confidence-region method is a mismatch between the confidence region and the model. Therefore, in our opinion, one of the central problems in the construction of extremum search schemes is that of the “correct” modeling of the optimizing function, i.e., the ability to associate with it “ideal” model-confidence-region pairs such that the “model geometry” agrees with the geometry of the confidence region. In the situations studied, under “correct” modeling the confidence-region method gives the same direction of motion as the gradient or Newton procedure, which in this case coincide.
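A minimal sketch of one confidence-region (trust-region) step may help fix ideas: the quadratic model $$m(d)= g^\top d+ \tfrac12 d^\top H d$$ is minimized subject to $$\| d\|\le \Delta$$. The truncation rule below is a crude illustrative fallback, not an exact subproblem solver and not the paper's construction.

```python
import numpy as np

def trust_region_step(g, H, Delta):
    """One confidence-region step for the model m(d) = g^T d + 0.5 d^T H d,
    with ||d|| <= Delta. Crude sketch: exact solvers handle the boundary case
    properly (e.g., via a Lagrange-multiplier equation)."""
    d = -np.linalg.solve(H, g)           # unconstrained model minimizer (Newton step)
    if np.linalg.norm(d) <= Delta:
        return d                         # model minimizer lies inside the region
    # otherwise fall back to the steepest-descent direction, scaled to the boundary
    return -Delta * g / np.linalg.norm(g)

H = np.array([[3.0, 1.0], [1.0, 2.0]])   # illustrative positive-definite model Hessian
g = np.array([1.0, 1.0])
step = trust_region_step(g, H, Delta=10.0)   # large region: the full Newton step is taken
```

When the confidence region is large enough, the step is exactly the Newton step, which is the agreement between the two procedures that the review emphasizes; when the region is small, the step is clipped to its boundary.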
This article is devoted to the accurate formulation of these considerations in the case of the quadratic approximation of the optimizing function. Instead of confidence regions specified by quadratic constraints, we shall consider normally distributed random vectors, so that an approximation in a region is replaced by an approximation of the relative probability distribution. This modification is more convenient for mathematical study.
Our construction can be interpreted as the construction of an analog of the Newton method (more precisely, of a variant of it: the confidence-region method) for a randomized extremum search in the class of normal distributions. The main result is the existence of “ideal” model-distribution pairs exactly corresponding to matched model-confidence-region pairs.
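For flavor, a generic randomized extremum search in the class of normal distributions can be sketched as follows: the confidence region is replaced by a normal distribution from which candidate points are sampled, and the distribution's center is moved toward improving samples. The sampling scheme, update rule, and shrinkage factor are illustrative choices, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    return float(x @ x)                  # toy objective with minimizer at the origin

mean = np.array([5.0, -3.0])             # center of the normal distribution
sigma = 1.0                              # scale, playing the role of a confidence region
for _ in range(200):
    samples = mean + sigma * rng.standard_normal((20, 2))
    best = min(samples, key=f)           # best of 20 normally distributed candidates
    if f(best) < f(mean):
        mean = best                      # move the distribution's center (elitist update)
    sigma *= 0.99                        # slowly shrink the sampling scale
```

Here the normal distribution plays the role the confidence region plays in the deterministic scheme: it determines where, relative to the current state, candidate displacements are considered.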
##### MSC:
 90C30 Nonlinear programming