
Sequential penalty derivative-free methods for nonlinear constrained optimization. (English) Zbl 1223.65045

The problem of minimizing a continuously differentiable function of several variables subject to smooth nonlinear constraints is studied. It is assumed that the first-order derivatives of the objective function and of the constraints can be neither computed nor explicitly approximated, so that any minimization procedure must rely only on a suitable sampling of the problem functions. The aim of the paper is to extend a sequential penalty approach for nonlinear programming to this derivative-free setting.
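For orientation, a problem of this type can be written (assuming, purely for illustration, inequality constraints only) as $\min_{x\in\mathbb{R}^n} f(x)$ subject to $g_i(x)\le 0$, $i=1,\dots,m$, where $f$ and the $g_i$ are smooth but only their function values, not their derivatives, are available to the algorithm.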
The proposed approach solves the original problem through a sequence of approximate minimizations of a merit function in which the penalization of constraint violation is progressively increased. A general theoretical result is established that links the sampling technique to the updating of the penalty parameter in a way that guarantees convergence to stationary points of the constrained problem. On the basis of this result, a new method is proposed and its convergence to stationary points of the constrained problem is proved. The method is evaluated on a collection of test problems and on a real application.
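To illustrate the general sequential penalty idea in a derivative-free setting (this is only a minimal sketch, not the algorithm analyzed in the paper), the following Python fragment couples a quadratic exterior penalty, whose weight is increased after each outer iteration, with a simple coordinate (compass) search as the sampling-based inner solver; the merit function, the penalty update and the inner solver are illustrative assumptions, not the choices of the reviewed method.

# Illustrative sketch of a sequential penalty derivative-free loop;
# not the algorithm of the reviewed paper, only the general idea:
# minimize a penalized merit function by sampling alone, then tighten
# the penalty before the next round of approximate minimization.

def compass_search(merit, x, step, tol, max_iter=1000):
    """Minimize `merit` by coordinate (compass) search: sample the
    2n axis directions, accept any improving point, otherwise halve
    the step, until the step falls below `tol`."""
    n = len(x)
    fx = merit(x)
    it = 0
    while step > tol and it < max_iter:
        improved = False
        for i in range(n):
            for s in (+1.0, -1.0):
                y = list(x)
                y[i] += s * step
                fy = merit(y)
                if fy < fx:          # plain decrease test, for simplicity
                    x, fx = y, fy
                    improved = True
        if not improved:
            step *= 0.5              # refine the sampling resolution
        it += 1
    return x, fx, step


def sequential_penalty_dfo(f, constraints, x0,
                           penalty=10.0, step=1.0, tol=1e-6,
                           outer_iters=20):
    """Outer loop: progressively increase the penalty on constraint
    violation while each inner derivative-free search is solved only
    approximately (down to the current sampling step)."""
    x = list(x0)
    for _ in range(outer_iters):

        def merit(y, rho=penalty):
            # quadratic exterior penalty: an illustrative choice,
            # not necessarily the merit function used in the paper
            viol = sum(max(0.0, g(y)) ** 2 for g in constraints)
            return f(y) + rho * viol

        x, _, step = compass_search(merit, x, step, tol)
        penalty *= 10.0               # increase the penalization
        step = max(step, 1e-3)        # re-expand sampling for next round
    return x


if __name__ == "__main__":
    # toy example: minimize (x-2)^2 + (y-1)^2 subject to x + y <= 2,
    # whose solution is (1.5, 0.5)
    f = lambda z: (z[0] - 2.0) ** 2 + (z[1] - 1.0) ** 2
    g = [lambda z: z[0] + z[1] - 2.0]     # constraint in the form g(z) <= 0
    print(sequential_penalty_dfo(f, g, [0.0, 0.0]))

How tightly the penalty update must be coordinated with the sampling resolution is exactly what the paper's theory addresses; the fixed factors used above are arbitrary placeholders.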

MSC:

65K05 Numerical mathematical programming methods
90C30 Nonlinear programming
90C56 Derivative-free methods and methods using generalized derivatives

Software:

NOMAD; SDPEN; OrthoMADS