# zbMATH — the first resource for mathematics

Gradient methods for nonlocal optimization using potential theory. (English. Russian original) Zbl 0837.90109
Phys.-Dokl. 38, No. 5, 178-180 (1993); translation from Dokl. Akad. Nauk, Ross. Akad. Nauk 330, No. 2, 170-172 (1993).
A new approach to unconstrained optimization of nonconvex nonsmooth functions is suggested. It is based on replacing the original problem of minimizing $$f$$ over $$\mathbb{R}^n$$ by the equivalent randomized problem of minimizing the functional $$\int_{\mathbb{R}^n} f(x)p(x)\,dx$$ over all density functions $$p$$. The variation of the randomized functional along a random vector $$Y$$ is given; maximizing this variation over all $$Y$$ determines the gradient direction, which is characterized by a mean vector field. To make this characterization constructive, the mean vector field is decomposed into a scalar potential (gradient) field and a divergence-free vector field. The corresponding potential function $$\phi$$ satisfies the Poisson equation $$\Delta\phi(x) = -(f(x) - c)q(x)$$, where $$q$$ is a controlled density function and $$c$$ is the mean value of $$f$$ with respect to $$q$$. The potential $$\phi$$ serves as a model of the extremal properties of $$f$$: first- and second-order methods (with respect to $$\phi$$) can be developed for minimizing nondifferentiable nonconvex functions $$f$$ [see the authors, Autom. Remote Control 54, No. 7, Pt. 1, 1077-1086 (1993; Zbl 0830.90126)]. The potential can also be used as a Lyapunov function in the convergence analysis of these methods [Autom. Remote Control No. 9-11 (1994)].
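The Poisson-potential construction can be illustrated with a minimal one-dimensional sketch (an illustration only, not the authors' algorithm): take $$q$$ as a Gaussian density centred at the current iterate, solve $$\phi''(x) = -(f(x) - c)q(x)$$ by finite differences with zero boundary values, and move the iterate against the gradient of $$\phi$$; since $$\phi$$ peaks where $$f$$ exceeds its mean $$c$$, the direction $$-\phi'$$ points toward regions of lower $$f$$. The function name, step rule, clipping, and all parameter values below are assumptions made for the illustration.

```python
import numpy as np

def potential_step(f, x0, lo=-5.0, hi=5.0, n=401, sigma=1.0, alpha=0.5):
    """One illustrative 1-D step of a potential-based descent
    (a hypothetical sketch, not the authors' method).

    Solves phi''(x) = -(f(x) - c) q(x) on [lo, hi] with phi = 0 at the
    boundary, where q is a Gaussian density centred at x0 and c is the
    mean of f under q, then moves x0 against the gradient of phi.
    """
    xs = np.linspace(lo, hi, n)
    h = xs[1] - xs[0]
    q = np.exp(-0.5 * ((xs - x0) / sigma) ** 2)   # unnormalized Gaussian
    fx = f(xs)
    c = np.sum(fx * q) / np.sum(q)                # mean of f under q
    rho = (fx - c) * q / (np.sum(q) * h)          # source term (f - c) q
    # finite-difference Laplacian on interior points: A phi = -h^2 rho
    m = n - 2
    A = (np.diag(-2.0 * np.ones(m))
         + np.diag(np.ones(m - 1), 1)
         + np.diag(np.ones(m - 1), -1))
    phi = np.zeros(n)
    phi[1:-1] = np.linalg.solve(A, -h * h * rho[1:-1])
    dphi = np.gradient(phi, h)                    # phi'(x) on the grid
    i0 = int(np.argmin(np.abs(xs - x0)))
    step = -dphi[i0]                              # descend the potential
    return x0 + alpha * np.sign(step) * min(abs(step), 1.0)
```

Because the step uses only values of $$f$$ (never its derivatives), the same sketch applies to nondifferentiable $$f$$; for example, iterating `potential_step` on $$f(x) = (x-2)^2$$ from $$x_0 = 0$$ moves the iterate toward the minimizer at $$2$$.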
Reviewer: A.Propoj (Moskva)
##### MSC:
 90C30 Nonlinear programming
 49J52 Nonsmooth analysis