Liuzzi, Giampaolo; Lucidi, Stefano; Sciandrone, Marco
Sequential penalty derivative-free methods for nonlinear constrained optimization. (English) Zbl 1223.65045
SIAM J. Optim. 20, No. 5, 2614-2635 (2010).

The problem of minimizing a continuously differentiable function of several variables subject to smooth nonlinear constraints is studied. It is assumed that the first-order derivatives of the objective function and of the constraints can be neither calculated nor explicitly approximated; hence, every minimization procedure must rely only on a suitable sampling of the problem functions. The aim of the paper is to extend a sequential penalty approach for nonlinear programming to the derivative-free setting.

The proposed approach consists of solving the original problem through a sequence of approximate minimizations of a merit function in which the penalization of constraint violation is progressively increased. A general theoretical result is established on the connection between the sampling technique and the updating of the penalty parameter that together guarantee convergence to stationary points of the constrained problem. On the basis of this general result, a new method is proposed and its convergence to stationary points of the constrained problem is proved. Test problems and a real application are used to evaluate the new computational method.

Reviewer: Vasilis Dimitriou (Chania)

Cited in 1 Review; Cited in 26 Documents

MSC:
65K05 Numerical mathematical programming methods
90C30 Nonlinear programming
90C56 Derivative-free methods and methods using generalized derivatives

Keywords: derivative-free optimization; nonlinear programming; sequential penalty functions; numerical example; sampling technique; convergence

Software: NOMAD; SDPEN; OrthoMADS

Cite: \textit{G. Liuzzi} et al., SIAM J. Optim. 20, No. 5, 2614--2635 (2010; Zbl 1223.65045)
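The sequential penalty idea summarized above can be sketched in a few lines. The following is a minimal illustrative Python implementation, not the authors' SDPEN algorithm: it uses a simple coordinate (direct) search in place of the paper's derivative-free line search, an exterior penalty merit function P(x; eps) = f(x) + (1/eps) * sum_i max(0, g_i(x)), and an explicit reduction of the penalty parameter eps between outer iterations. All function names, step sizes, and tolerances are chosen for the example.

```python
# Sketch of a sequential penalty derivative-free scheme for
#   minimize f(x)  subject to  g_i(x) <= 0,
# using only function samples (no derivatives). Illustrative only;
# not the SDPEN line-search algorithm from the reviewed paper.

def penalty(f, g, x, eps):
    # Exterior penalty merit function P(x; eps) = f(x) + (1/eps) * sum_i max(0, g_i(x)).
    return f(x) + sum(max(0.0, gi(x)) for gi in g) / eps

def coordinate_search(P, x, step, tol=1e-8):
    # Derivative-free descent: poll +/- step along each coordinate axis,
    # accept improving points, and halve the step when no poll improves P.
    n = len(x)
    while step > tol:
        improved = False
        for i in range(n):
            for s in (step, -step):
                y = list(x)
                y[i] += s
                if P(y) < P(x):
                    x, improved = y, True
        if not improved:
            step *= 0.5
    return x

def sequential_penalty(f, g, x0, eps=1.0, outer_iters=6):
    # Outer loop: approximately minimize the merit function, then tighten
    # the penalization (eps -> eps/10) and restart from the current iterate.
    x = list(x0)
    for _ in range(outer_iters):
        x = coordinate_search(lambda y: penalty(f, g, y, eps), x, step=0.5)
        eps *= 0.1
    return x

# Toy example: minimize x0 + x1 subject to x0^2 + x1^2 - 2 <= 0;
# the constrained minimizer is (-1, -1).
f = lambda x: x[0] + x[1]
g = [lambda x: x[0] ** 2 + x[1] ** 2 - 2.0]
x_star = sequential_penalty(f, g, [0.0, 0.0])  # x_star ≈ [-1.0, -1.0]
```

The two ingredients the review emphasizes appear directly: the sampling step size (here the poll step) shrinks inside each inner minimization, while the penalty parameter eps is driven toward zero across outer iterations; the paper's theoretical contribution is precisely the coordination of these two updates needed to guarantee convergence to stationary points.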