Derivative-free optimization methods. (English) Zbl 1461.65169

The paper gives a “survey of methods for derivative-free optimization and key results for their analysis” for determining a local minimum of mathematical programming problems. Global optimization and multi-objective optimization, apart from some exceptions in Section 8, are not considered in depth. Pointers to related surveys are given. Apart from a reference to one book, applications are not discussed. “At the persistent page https://archive.org/services/purl/dfomethods we intend to link all works that cite the entries in our bibliography and those that cite this survey.” That page contains all BibTeX sources and, in many cases, a link to the PDF of a paper via its DOI. Most of the roughly 340 references date from the 1990s onwards, and more than half of them are from the past ten years. To apply any of the methods presented, it is always necessary to consult the cited references. The contents are as follows:
Introduction, p. 288
Deterministic methods for deterministic objectives, p. 293
Randomized methods for deterministic objectives, p. 318
Methods for convex objectives, p. 324
Methods for structured objectives, p. 336
Methods for stochastic optimization, p. 344
Methods for constrained optimization, p. 354
Other extensions and practical considerations, p. 369
Appendix: Collection of Worst Case Complexity analysis (WCC) results, p. 376
References, pp. 380-404

In the introduction, the main purpose and the scope restrictions of the paper are explained. Section 2 discusses the main search methods by presenting the principal algorithms: the Nelder-Mead simplex method, directional direct search (DDS) with descent tests, generalized pattern search (GPS), mesh adaptive direct search (MADS), polynomial models, quadratic interpolation models, radial basis function (RBF) interpolation models, model-based methods such as trust-region (TR) methods, and filtering methods. This section is essential for understanding all subsequent sections.
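To make the flavour of directional direct search concrete, the following is a minimal sketch (mine, not the survey's) of a coordinate-search instance of DDS: poll along the positive spanning set {±e_i}, accept on simple decrease, and contract the step size after an unsuccessful poll. All names and parameter values are illustrative.

```python
import numpy as np

def directional_direct_search(f, x0, step=1.0, tol=1e-8, max_iter=1000):
    """Coordinate-search sketch of directional direct search (DDS).

    Polls f along the positive spanning set {+e_i, -e_i}; a successful
    poll moves the iterate, an unsuccessful one halves the step size.
    """
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    n = x.size
    for _ in range(max_iter):
        if step < tol:
            break
        improved = False
        for d in np.vstack((np.eye(n), -np.eye(n))):  # poll directions
            trial = x + step * d
            ft = f(trial)
            if ft < fx:  # simple decrease (a sufficient-decrease test is also common)
                x, fx, improved = trial, ft, True
                break
        if not improved:
            step *= 0.5  # unsuccessful poll: contract
    return x, fx
```

Variants presented in the survey differ mainly in the choice of poll directions (GPS restricts them to a mesh, MADS lets them become asymptotically dense) and in the acceptance test.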
In all of the following sections, new notions are defined by formulas with corresponding explanations. From Section 2 to Section 6, the presented methods are discussed, mostly without formulas, with respect to performance, convergence, existing counterexamples and worst-case complexity (WCC), always supported by cited references. The appendix states some of the WCC bounds explicitly. Section 3 deals with random search, randomized direct-search and randomized TR methods. Section 4 considers convex deterministic objectives (DDS with improved WCC) and convex stochastic objectives (bandit feedback, a convex expected value as objective). Section 5 covers nonlinear least squares, composite nonsmooth optimization, and bilevel and general minimax problems. The stochastic problems of Section 6 again have an expected value as objective, but now a nonconvex one.
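The randomized methods of Section 3 can be illustrated by a Gaussian-smoothing random search in the spirit of Nesterov and Spokoiny. The sketch below is illustrative only; the function name, step sizes and the forward-difference direction oracle are my choices, not a specific algorithm from the survey.

```python
import numpy as np

def random_gradient_free(f, x0, sigma=1e-4, alpha=0.1, iters=500, seed=0):
    """Random search with a forward-difference Gaussian direction oracle.

    g = (f(x + sigma*u) - f(x)) / sigma * u, with u standard normal,
    is an (approximately unbiased) estimate of the smoothed gradient;
    the iterate then takes a fixed-step descent move along -g.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        u = rng.standard_normal(x.size)
        g = (f(x + sigma * u) - f(x)) / sigma * u
        x = x - alpha * g
    return x
```

Only two function values are needed per iteration, independently of the dimension; this is the source of the dimension-dependent WCC factors discussed in the survey.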
Section 7 extends the previously described unconstrained methods to constrained optimization using penalty methods, augmented Lagrangians, and SQP with TR models. The constraints treated may be relaxable or unrelaxable, algebraic or simulation-based, modelled, or unrelaxable and discrete. The discussion is again rather general, with many references but mostly without convergence results and without WCC; modelling is the main task. Section 8 gives extensions and practical considerations for special cases of multi-objective optimization (Pareto optimality), global optimization (Lipschitz underestimators, multistart methods) and multifidelity optimization, with a focus both on extensions of the previously presented methods and on future developments.
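As a concrete illustration of the penalty approach to constrained DFO, one can minimize a quadratic-penalty merit function with a derivative-free inner solver and increase the penalty parameter between rounds. The sketch below uses SciPy's Nelder-Mead as the inner solver; the function name, schedule and tolerances are assumptions of mine, not the survey's specific algorithms.

```python
import numpy as np
from scipy.optimize import minimize

def penalty_dfo(f, constraints, x0, mu0=1.0, growth=10.0, rounds=6):
    """Quadratic-penalty sketch for derivative-free constrained optimization.

    Each round minimizes f(x) + mu * sum(max(0, c_i(x))^2) with a
    derivative-free inner solver (Nelder-Mead), warm-starting from the
    previous solution, then increases mu. Feasibility means c_i(x) <= 0.
    """
    x = np.asarray(x0, dtype=float)
    mu = mu0
    for _ in range(rounds):
        merit = lambda z, m=mu: f(z) + m * sum(max(0.0, c(z)) ** 2 for c in constraints)
        res = minimize(merit, x, method="Nelder-Mead",
                       options={"xatol": 1e-10, "fatol": 1e-10, "maxiter": 2000})
        x = res.x
        mu *= growth
    return x
```

The penalty minimizers approach the constrained solution only as mu grows, which is why the survey emphasizes augmented Lagrangian and filter alternatives that avoid ill-conditioning for large penalty parameters.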


65K05 Numerical mathematical programming methods
90C56 Derivative-free methods and methods using generalized derivatives
90-02 Research exposition (monographs, survey articles) pertaining to operations research and mathematical programming
90-08 Computational methods for problems pertaining to operations research and mathematical programming
Full Text: DOI arXiv


[1] Abramson, M. A., Second-order behavior of pattern search, SIAM J. Optim., 16, 515-530, (2005) · Zbl 1122.90096
[2] Abramson, M. A. and Audet, C. (2006), ‘Convergence of mesh adaptive direct search to second-order stationary points’, SIAM J. Optim.17, 606-619. · Zbl 1174.90877
[3] Abramson, M. A., Audet, C. and Dennis, J. E. Jr (2004), ‘Generalized pattern searches with derivative information’, Math. Program.100, 3-25. · Zbl 1146.90524
[4] Abramson, M. A., Audet, C., Chrissis, J. and Walston, J. (2009a), ‘Mesh adaptive direct search algorithms for mixed variable optimization’, Optim. Lett.3, 35-47. · Zbl 1154.90551
[5] Abramson, M. A., Audet, C., Dennis, J. E. Jr and Le Digabel, S. (2009b), ‘OrthoMADS: A deterministic MADS instance with orthogonal directions’, SIAM J. Optim.20, 948-966. · Zbl 1189.90202
[6] Abramson, M. A., Frimannslund, L. and Steihaug, T. (2013), ‘A subclass of generating set search with convergence to second-order stationary points’, Optim. Methods Software29, 900-918. · Zbl 1298.90101
[7] Abramson, M., Audet, C. and Dennis, J. E. Jr (2007), ‘Filter pattern search algorithms for mixed variable constrained optimization problems’, Pacific J. Optim.3, 477-500. · Zbl 1138.65043
[8] Agarwal, A., Dekel, O. and Xiao, L. (2010), Optimal algorithms for online convex optimization with multi-point bandit feedback. In 23rd Conference on Learning Theory (COLT 2010) (Kalai, A. T. and Mohri, M., eds), pp. 28-40.
[9] Agarwal, A., Foster, D. P., Hsu, D. J., Kakade, S. M. and Rakhlin, A. (2011), Stochastic convex optimization with bandit feedback. In Advances in Neural Information Processing Systems 24 (Shawe-Taylor, J.et al., eds), Curran Associates, pp. 1035-1043. · Zbl 1270.90107
[10] Agarwal, A., Wainwright, M. J., Bartlett, P. L. and Ravikumar, P. K. (2009), Information-theoretic lower bounds on the oracle complexity of convex optimization. In Advances in Neural Information Processing Systems 22 (Bengio, Y.et al., eds), Curran Associates, pp. 1-9. · Zbl 1365.94132
[11] Agrawal, R., Sample mean based index policies with \(O(\log n)\) regret for the multi-armed bandit problem, Adv. Appl. Probab., 27, 1054-1078, (1995) · Zbl 0840.90129
[12] Alarie, S., Amaioua, N., Audet, C., Le Digabel, S. and Leclaire, L.-A. (2018), Selection of variables in parallel space decomposition for the mesh adaptive direct search algorithm. Technical report, Cahier du GERAD G-2018-38, GERAD.
[13] Alexandrov, N., Dennis, J. E. Jr, Lewis, R. M. and Torczon, V. (1998), ‘A trust region framework for managing the use of approximation models in optimization’, Struct. Optim.15, 16-23.
[14] Amaioua, N., Audet, C., Conn, A. R. and Le Digabel, S. (2018), ‘Efficient solution of quadratically constrained quadratic subproblems within the mesh adaptive direct search algorithm’, European J. Oper. Res.268, 13-24. · Zbl 1403.90618
[15] Amaran, S., Sahinidis, N. V., Sharda, B. and Bury, S. J. (2015), ‘Simulation optimization: A review of algorithms and applications’, Ann. Oper. Res.240, 351-380. · Zbl 1342.90113
[16] Anderson, H. L., Scientific uses of the MANIAC, J. Statist. Phys., 43, 731-748, (1986)
[17] Arouxét, M. B., Echebest, N. and Pilotta, E. A. (2011), ‘Active-set strategy in Powell’s method for optimization without derivatives’, Comput. Appl. Math.30, 171-196. · Zbl 1215.90046
[18] Audet, C., Convergence results for generalized pattern search algorithms are tight, Optim. Engng, 5, 101-122, (2004) · Zbl 1085.90053
[19] Audet, C. (2014), A survey on direct search methods for blackbox optimization and their applications. In Mathematics Without Boundaries: Surveys in Interdisciplinary Research (Pardalos, P. M. and Rassias, T. M., eds), Springer, pp. 31-56. · Zbl 1321.90125
[20] Audet, C. and Dennis, J. E. Jr (2000), ‘Pattern search algorithms for mixed variable programming’, SIAM J. Optim.11, 573-594. · Zbl 1035.90048
[21] Audet, C. and Dennis, J. E. Jr (2002), ‘Analysis of generalized pattern searches’, SIAM J. Optim.13, 889-903. · Zbl 1053.90118
[22] Audet, C. and Dennis, J. E. Jr (2004), ‘A pattern search filter method for nonlinear programming without derivatives’, SIAM J. Optim.14, 980-1010. · Zbl 1073.90066
[23] Audet, C. and Dennis, J. E. Jr (2006), ‘Mesh adaptive direct search algorithms for constrained optimization’, SIAM J. Optim.17, 188-217. · Zbl 1112.90078
[24] Audet, C. and Dennis, J. E. Jr (2009), ‘A progressive barrier for derivative-free nonlinear programming’, SIAM J. Optim.20, 445-472. · Zbl 1187.90266
[25] Audet, C. and Hare, W. L. (2017), Derivative-Free and Blackbox Optimization, Springer. · Zbl 1391.90001
[26] Audet, C., Bigeon, J., Cartier, D., Le Digabel, S. and Salomon, L. (2018a), Performance indicators in multiobjective optimization. Technical report, Cahier du GERAD G-2018-90, GERAD.
[27] Audet, C., Dennis, J. E. Jr and Le Digabel, S. (2008a), ‘Parallel space decomposition of the mesh adaptive direct search algorithm’, SIAM J. Optim.19, 1150-1170. · Zbl 1180.90363
[28] Audet, C., Ianni, A., Le Digabel, S. and Tribes, C. (2014), ‘Reducing the number of function evaluations in mesh adaptive direct search algorithms’, SIAM J. Optim.24, 621-642. · Zbl 1297.90149
[29] Audet, C., Ihaddadene, A., Le Digabel, S. and Tribes, C. (2018b), ‘Robust optimization of noisy blackbox problems using the mesh adaptive direct search algorithm’, Optim. Lett.12, 675-689. · Zbl 1403.90619
[30] Audet, C., Le Digabel, S. and Peyrega, M. (2015), ‘Linear equalities in blackbox optimization’, Comput. Optim. Appl.61, 1-23. · Zbl 1311.90185
[31] Audet, C., Savard, G. and Zghal, W. (2008b), ‘Multiobjective optimization through a series of single-objective formulations’, SIAM J. Optim.19, 188-210. · Zbl 1167.90020
[32] Audet, C., Savard, G. and Zghal, W. (2010), ‘A mesh adaptive direct search algorithm for multiobjective optimization’, European J. Oper. Res.204, 545-556. · Zbl 1181.90137
[33] Auer, P., Using confidence bounds for exploitation-exploration trade-offs, J. Mach. Learn. Res., 3, 397-422, (2002) · Zbl 1084.68543
[34] Auer, P., Cesa-Bianchi, N. and Fischer, P. (2002), ‘Finite-time analysis of the multiarmed bandit problem’, Machine Learning47, 235-256. · Zbl 1012.68093
[35] Auer, P., Cesa-Bianchi, N., Freund, Y. and Schapire, R. E. (2003), ‘The nonstochastic multiarmed bandit problem’, SIAM J. Comput.32, 48-77. · Zbl 1029.68087
[36] Auger, A., Hansen, N., Perez Zerpa, J., Ros, R. and Schoenauer, M. (2009), Experimental comparisons of derivative free optimization algorithms. In Experimental Algorithms (SEA 2009) (Vahrenhold, J., ed.), , Springer, pp. 3-15.
[37] Augustin, F. and Marzouk, Y. M. (2014), NOWPAC: A provably convergent derivative-free nonlinear optimizer with path-augmented constraints. arXiv:1403.1931
[38] Augustin, F. and Marzouk, Y. M. (2017), A trust-region method for derivative-free nonlinear constrained stochastic optimization. arXiv:1703.04156
[39] Avriel, M. and Wilde, D. J. (1967), ‘Optimal condenser design by geometric programming’, Ind. Engng Chem. Process Des. Dev.6, 256-263.
[40] Bach, F. and Perchet, V. (2016), Highly-smooth zero-th order online optimization. In 29th Annual Conference on Learning Theory (COLT 2016) (Feldman, V.et al., eds). Proc. Mach. Learn. Res.49, 257-283.
[41] Bagirov, A. M. and Ugon, J. (2006), ‘Piecewise partially separable functions and a derivative-free algorithm for large scale nonsmooth optimization’, J. Global Optim.35, 163-195. · Zbl 1136.90515
[42] Bagirov, A. M., Karasözen, B. and Sezer, M. (2007), ‘Discrete gradient method: Derivative-free method for nonsmooth optimization’, J. Optim. Theory Appl.137, 317-334. · Zbl 1165.90021
[43] Bajaj, I., Iyer, S. S. and Hasan, M. M. F. (2018), ‘A trust region-based two phase algorithm for constrained black-box and grey-box optimization with infeasible initial point’, Comput. Chem. Engng116, 306-321.
[44] Balaprakash, P., Salim, M., Uram, T. D., Vishwanath, V. and Wild, S. M. (2018), DeepHyper: Asynchronous hyperparameter search for deep neural networks. In 25th IEEE International Conference on High Performance Computing, Data, and Analytics (HiPC18). doi:10.1007/s10589-019-00063-3
[45] Balasubramanian, K. and Ghadimi, S. (2018), Zeroth-order (non)-convex stochastic optimization via conditional gradient and gradient updates. In Advances in Neural Information Processing Systems 31 (Bengio, S.et al., eds), Curran Associates, pp. 3459-3468.
[46] Balasubramanian, K. and Ghadimi, S. (2019), Zeroth-order nonconvex stochastic optimization: Handling constraints, high-dimensionality, and saddle-points. arXiv:1809.06474
[47] Bandeira, A. S., Scheinberg, K. and Vicente, L. N. (2012), ‘Computation of sparse low degree interpolating polynomials and their application to derivative-free optimization’, Math. Program.134, 223-257. · Zbl 1254.65072
[48] Bandeira, A. S., Scheinberg, K. and Vicente, L. N. (2014), ‘Convergence of trust-region methods based on probabilistic models’, SIAM J. Optim.24, 1238-1264. · Zbl 1311.90186
[49] Banichuk, N. V., Petrov, V. M. and Chernous’Ko, F. L. (1966), ‘The solution of variational and boundary value problems by the method of local variations’, USSR Comput. Math. Math. Phys.6, 1-21.
[50] Barton, R. R. and Ivey, J. S. Jr (1996), ‘Nelder-Mead simplex modifications for simulation optimization’, Manag. Sci.42, 954-973. · Zbl 0884.90118
[51] Barzilai, J. and Borwein, J. M. (1988), ‘Two-point step size gradient methods’, IMA J. Numer. Anal.8, 141-148. · Zbl 0638.65055
[52] Bauschke, H. H., Hare, W. L. and Moursi, W. M. (2014), ‘A derivative-free comirror algorithm for convex optimization’, Optim. Methods Software30, 706-726. · Zbl 1328.90104
[53] Baydin, A. G., Pearlmutter, B. A., Radul, A. A. and Siskind, J. M. (2018), ‘Automatic differentiation in machine learning: A survey’, J. Mach. Learn. Res.18(153), 1-43. · Zbl 06982909
[54] Belitz, P. and Bewley, T. (2013), ‘New horizons in sphere-packing theory, II: Lattice-based derivative-free optimization via global surrogates’, J. Global Optim.56, 61-91. · Zbl 1273.90241
[55] Belloni, A., Liang, T., Narayanan, H. and Rakhlin, A. (2015), Escaping the local minima via simulated annealing: Optimization of approximately convex functions. In 28th Conference on Learning Theory (COLT 2015). Proc. Mach. Learn. Res.40, 240-265.
[56] Belotti, P., Kirches, C., Leyffer, S., Linderoth, J., Luedtke, J. and Mahajan, A. (2013), Mixed-integer nonlinear optimization. In Acta Numerica, Vol. 22, Cambridge University Press, pp. 1-131. · Zbl 1291.65172
[57] Berahas, A. S., Byrd, R. H. and Nocedal, J. (2019), ‘Derivative-free optimization of noisy functions via quasi-Newton methods’, SIAM J. Optim., to appear. arXiv:1803.10173 · Zbl 1411.90359
[58] Bertsimas, D. and Nohadani, O. (2010), ‘Robust optimization with simulated annealing’, J. Global Optim.48, 323-334. · Zbl 1198.90402
[59] Bertsimas, D., Nohadani, O. and Teo, K. M. (2010), ‘Robust optimization for unconstrained simulation-based problems’, Oper. Res.58, 161-178. · Zbl 1226.90074
[60] Bhatnagar, S., Prasad, H. and Prashanth, L. A. (2013), Kiefer-Wolfowitz algorithm. In Stochastic Recursive Algorithms for Optimization: Simultaneous Perturbation Methods, , Springer, pp. 31-39.
[61] Bibi, A., Bergou, E. H., Sener, O., Ghanem, B. and Richtárik, P. (2019), A stochastic derivative-free optimization method with importance sampling. arXiv:1902.01272
[62] Billups, S. C., Larson, J. and Graf, P. (2013), ‘Derivative-free optimization of expensive functions with computational error using weighted regression’, SIAM J. Optim.55, 27-53. · Zbl 1295.90106
[63] Björkman, M. and Holmström, K. (2000), ‘Global optimization of costly nonconvex functions using radial basis functions’, Optim. Engng1, 373-397.
[64] Blanchet, J., Cartis, C., Menickelly, M. and Scheinberg, K. (2019), ‘Convergence rate analysis of a stochastic trust region method via submartingales’, INFORMS J. Optim., to appear. arXiv:1609.07428
[65] Blum, J. R., Approximation methods which converge with probability one, Ann. Math. Statist., 25, 382-386, (1954) · Zbl 0055.37806
[66] Blum, J. R., Multidimensional stochastic approximation methods, Ann. Math. Statist., 25, 737-744, (1954) · Zbl 0056.38305
[67] Boender, C. G. E., Rinnooy Kan, A. H. G., Timmer, G. T. and Stougie, L. (1982), ‘A stochastic method for global optimization’, Math. Program.22, 125-140. · Zbl 0525.90076
[68] Bogani, C., Gasparo, M. G. and Papini, A. (2009), ‘Generalized pattern search methods for a class of nonsmooth optimization problems with structure’, J. Comput. Appl. Math.229, 283-293. · Zbl 1166.65028
[69] Bortz, D. M. and Kelley, C. T. (1998), The simplex gradient and noisy optimization problems. In Computational Methods for Optimal Design and Control (Borggaard, J.et al., eds), , Birkhäuser, pp. 77-90. · Zbl 0927.65077
[70] Bottou, L., Curtis, F. E. and Nocedal, J. (2018), ‘Optimization methods for large-scale machine learning’, SIAM Review60, 223-311. · Zbl 1397.65085
[71] Box, G. E. P. and Draper, N. R. (1987), Empirical Model Building and Response Surfaces, Wiley. · Zbl 0614.62104
[72] Box, M. J., A new method of constrained optimization and a comparison with other methods, Comput. J., 8, 42-52, (1965) · Zbl 0142.11305
[73] Box, M. J., A comparison of several current optimization methods, and the use of transformations in constrained problems, Comput. J., 9, 67-77, (1966) · Zbl 0146.13304
[74] Brekelmans, R., Driessen, L., Hamers, H. and Den Hertog, D. (2005), ‘Constrained optimization involving expensive function evaluations: A sequential approach’, European J. Oper. Res.160, 121-138. · Zbl 1067.90151
[75] Brent, R. P., Algorithms for Minimization Without Derivatives, (1973), Prentice Hall · Zbl 0245.65032
[76] Brown, K. M. and Dennis, J. E. Jr (1971), ‘Derivative free analogues of the Levenberg-Marquardt and Gauss algorithms for nonlinear least squares approximation’, Numer. Math.18, 289-297. · Zbl 0235.65043
[77] Bubeck, S. and Cesa-Bianchi, N. (2012), ‘Regret analysis of stochastic and nonstochastic multi-armed bandit problems’, Found. Trends Mach. Learn.5, 1-122. · Zbl 1281.91051
[78] Bubeck, S., Lee, Y. T. and Eldan, R. (2017), Kernel-based methods for bandit convex optimization. In 49th Annual ACM SIGACT Symposium on Theory of Computing (STOC 2017), ACM, pp. 72-85. · Zbl 1370.90175
[79] Bubeck, S., Munos, R., Stoltz, G. and Szepesvári, C. (2011a), ‘\({\mathcal{X}}\)-armed bandits’, J. Mach. Learn. Res.12, 1655-1695. · Zbl 1280.91038
[80] Bubeck, S., Stoltz, G. and Yu, J. Y. (2011b), Lipschitz bandits without the Lipschitz constant. In Algorithmic Learning Theory (ALT 2011) (Kivinen, J.et al., eds), , Springer, pp. 144-158. · Zbl 1349.60069
[81] Bueno, L. F., Friedlander, A., Martínez, J. M. and Sobral, F. N. C. (2013), ‘Inexact restoration method for derivative-free optimization with smooth constraints’, SIAM J. Optim.23, 1189-1213. · Zbl 1280.65050
[82] Buhmann, M. D. (2000), Radial basis functions. In Acta Numerica, Vol. 9, Cambridge University Press, pp. 1-38. · Zbl 1004.65015
[83] Bull, A. D., Convergence rates of efficient global optimization algorithms, J. Mach. Learn. Res., 12, 2879-2904, (2011) · Zbl 1280.90094
[84] Burke, J. V., Curtis, F. E., Lewis, A. S., Overton, M. L. and Simões, L. E. A. (2018), Gradient sampling methods for nonsmooth optimization. arXiv:1804.11003
[85] Bűrmen, Á., Olenšek, J. and Tuma, T. (2015), ‘Mesh adaptive direct search with second directional derivative-based Hessian update’, Comput. Optim. Appl.62, 693-715. · Zbl 1337.90063
[86] Bűrmen, Á., Puhan, J. and Tuma, T. (2006), ‘Grid restrained Nelder-Mead algorithm’, Comput. Optim. Appl.34, 359-375. · Zbl 1152.90658
[87] Carter, R. G., Gablonsky, J. M., Patrick, A., Kelley, C. T. and Eslinger, O. J. (2001), ‘Algorithms for noisy problems in gas transmission pipeline optimization’, Optim. Engng2, 139-157. · Zbl 1079.90624
[88] Cartis, C. and Roberts, L. (2017), A derivative-free Gauss-Newton method. arXiv:1710.11005
[89] Cartis, C. and Scheinberg, K. (2018), ‘Global convergence rate analysis of unconstrained optimization methods based on probabilistic models’, Math. Program.169, 337-375. · Zbl 1407.90307
[90] Cartis, C., Fiala, J., Marteau, B. and Roberts, L. (2018), Improving the flexibility and robustness of model-based derivative-free optimization solvers. arXiv:1804.00154
[91] Cartis, C., Gould, N. I. M. and Toint, P. L. (2012), ‘On the oracle complexity of first-order and derivative-free algorithms for smooth nonconvex minimization’, SIAM J. Optim.22, 66-86. · Zbl 1250.90083
[92] Cartis, C., Gould, N. I. M. and Toint, P. L. (2018), Sharp worst-case evaluation complexity bounds for arbitrary-order nonconvex optimization with inexpensive constraints. arXiv:1811.01220
[93] Castle, B. (2012), Quasi-Newton methods for stochastic optimization and proximity-based methods for disparate information fusion. PhD thesis, Indiana University.
[94] Céa, J. (1971), Optimisation: Théorie et Algorithmes, Méthodes Mathématiques de l’Informatique, Dunod. · Zbl 0211.17402
[95] Chandramouli, G. and Narayanan, V. (2019), A scaled conjugate gradient based direct search algorithm for high dimensional box constrained derivative free optimization. arXiv:1901.05215
[96] Chang, K.-H., Stochastic Nelder-Mead simplex method: A new globally convergent direct search method for simulation optimization, European J. Oper. Res., 220, 684-694, (2012) · Zbl 1253.90178
[97] Chang, K.-H., Hong, L. J. and Wan, H. (2013), ‘Stochastic trust-region response-surface method (STRONG): A new response-surface framework for simulation optimization’, INFORMS J. Comput.25, 230-243.
[98] Chen, H. and Schmeiser, B. W. (2001), ‘Stochastic root finding via retrospective approximation’, IIE Trans.33, 259-275.
[99] Chen, R. (2015), Stochastic derivative-free optimization of noisy functions. PhD thesis, Lehigh University.
[100] Chen, R., Menickelly, M. and Scheinberg, K. (2018a), ‘Stochastic optimization using a trust-region method and random models’, Math. Program.169, 447-487. · Zbl 1401.90136
[101] Chen, X. and Kelley, C. T. (2016), ‘Optimization with hidden constraints and embedded Monte Carlo computations’, Optim. Engng17, 157-175. · Zbl 1364.65119
[102] Chen, X., Kelley, C. T., Xu, F. and Zhang, Z. (2018b), ‘A smoothing direct search method for Monte Carlo-based bound constrained composite nonsmooth optimization’, SIAM J. Sci. Comput.40, A2174-A2199. · Zbl 1461.65138
[103] Choi, T. D. and Kelley, C. T. (2000), ‘Superlinear convergence and implicit filtering’, SIAM J. Optim.10, 1149-1162. · Zbl 0994.65071
[104] Choi, T. D., Eslinger, O. J., Kelley, C. T., David, J. W. and Etheridge, M. (2000), ‘Optimization of automotive valve train components with implicit filtering’, Optim. Engng1, 9-27. · Zbl 1077.93523
[105] Ciccazzo, A., Latorre, V., Liuzzi, G., Lucidi, S. and Rinaldi, F. (2015), ‘Derivative-free robust optimization for circuit design’, J. Optim. Theory Appl.164, 842-861. · Zbl 1320.90100
[106] Cocchi, G., Liuzzi, G., Papini, A. and Sciandrone, M. (2018), ‘An implicit filtering algorithm for derivative-free multiobjective optimization with box constraints’, Comput. Optim. Appl.69, 267-296. · Zbl 1400.90269
[107] Colson, B. and Toint, P. L. (2001), ‘Exploiting band structure in unconstrained optimization without derivatives’, Optim. Engng2, 399-412. · Zbl 1035.90083
[108] Colson, B. and Toint, P. L. (2002), A derivative-free algorithm for sparse unconstrained optimization problems. In Trends in Industrial and Applied Mathematics (Siddiqi, A. H. and Kočvara, M., eds), , Springer, pp. 131-147.
[109] Colson, B. and Toint, P. L. (2005), ‘Optimizing partially separable functions without derivatives’, Optim. Methods Software20, 493-508. · Zbl 1152.90659
[110] Conejo, P. D., Karas, E. W. and Pedroso, L. G. (2015), ‘A trust-region derivative-free algorithm for constrained optimization’, Optim. Methods Software30, 1126-1145. · Zbl 1328.90160
[111] Conejo, P. D., Karas, E. W., Pedroso, L. G., Ribeiro, A. A. and Sachine, M. (2013), ‘Global convergence of trust-region algorithms for convex constrained minimization without derivatives’, Appl. Math. Comput.220, 324-330. · Zbl 1329.90170
[112] Conn, A. R. and Le Digabel, S. (2013), ‘Use of quadratic models with mesh-adaptive direct search for constrained black box optimization’, Optim. Methods Software28, 139-158. · Zbl 1270.90073
[113] Conn, A. R. and Toint, P. L. (1996), An algorithm using quadratic interpolation for unconstrained derivative free optimization. In Nonlinear Optimization and Applications (Di Pillo, G. and Giannessi, F., eds), Springer, pp. 27-47. · Zbl 0976.90102
[114] Conn, A. R. and Vicente, L. N. (2012), ‘Bilevel derivative-free optimization and its application to robust optimization’, Optim. Methods Software27, 561-577. · Zbl 1242.90227
[115] Conn, A. R., Gould, N. I. M. and Toint, P. L. (1991), ‘A globally convergent augmented Lagrangian algorithm for optimization with general constraints and simple bounds’, SIAM J. Numer. Anal.28, 545-572. · Zbl 0724.65067
[116] Conn, A. R., Gould, N. I. M. and Toint, P. L. (2000), Trust-Region Methods, SIAM. · Zbl 0958.65071
[117] Conn, A. R., Scheinberg, K. and Toint, P. L. (1997a), On the convergence of derivative-free methods for unconstrained optimization. In Approximation Theory and Optimization: Tributes to M. J. D. Powell (Iserles, A. and Buhmann, M., eds), Cambridge University Press, pp. 83-108. · Zbl 1042.90617
[118] Conn, A. R., Scheinberg, K. and Toint, P. L. (1997b), ‘Recent progress in unconstrained nonlinear optimization without derivatives’, Math. Program.79, 397-414. · Zbl 0887.90154
[119] Conn, A. R., Scheinberg, K. and Toint, P. L. (1998), A derivative free optimization algorithm in practice. In 7th AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, American Institute of Aeronautics and Astronautics. · Zbl 1042.90617
[120] Conn, A. R., Scheinberg, K. and Toint, P. L. (2001), DFO (derivative free optimization software). https://projects.coin-or.org/Dfo
[121] Conn, A. R., Scheinberg, K. and Vicente, L. N. (2008a), ‘Geometry of interpolation sets in derivative free optimization’, Math. Program.111, 141-172. · Zbl 1163.90022
[122] Conn, A. R., Scheinberg, K. and Vicente, L. N. (2008b), ‘Geometry of sample sets in derivative free optimization: Polynomial regression and underdetermined interpolation’, IMA J. Numer. Anal.28, 721-748. · Zbl 1157.65034
[123] Conn, A. R., Scheinberg, K. and Vicente, L. N. (2009a), ‘Global convergence of general derivative-free trust-region algorithms to first and second order critical points’, SIAM J. Optim.20, 387-415. · Zbl 1187.65062
[124] Conn, A. R., Scheinberg, K. and Vicente, L. N. (2009b), Introduction to Derivative-Free Optimization, SIAM. · Zbl 1163.49001
[125] Coope, I. D. and Price, C. J. (2000), ‘Frame based methods for unconstrained optimization’, J. Optim. Theory Appl.107, 261-274. · Zbl 0983.90074
[126] Coope, I. D. and Tappenden, R. (2019), ‘Efficient calculation of regular simplex gradients’, Comput. Optim. Appl.72, 561-588. · Zbl 1414.90363
[127] Custódio, A. L. and Madeira, J. F. A. (2015), ‘GLODS: Global and local optimization using direct search’, J. Global Optim.62, 1-28. · Zbl 1328.90161
[128] Custódio, A. L. and Madeira, J. F. A. (2016), MultiGLODS: Global and local multiobjective optimization using direct search. Technical report, Universidade Nova de Lisboa.
[129] Custódio, A. L. and Vicente, L. N. (2005), SID-PSM: A pattern search method guided by simplex derivatives for use in derivative-free optimization. http://www.mat.uc.pt/sid-psm
[130] Custódio, A. L. and Vicente, L. N. (2007), ‘Using sampling and simplex derivatives in pattern search methods’, SIAM J. Optim.18, 537-555.
[131] Custódio, A. L., Dennis, J. E. Jr and Vicente, L. N. (2008), ‘Using simplex gradients of nonsmooth functions in direct search methods’, IMA J. Numer. Anal.28, 770-784.
[132] Custódio, A. L., Madeira, J. F. A., Vaz, A. I. F. and Vicente, L. N. (2011), ‘Direct multisearch for multiobjective optimization’, SIAM J. Optim.21, 1109-1140. · Zbl 1230.90167
[133] Custódio, A. L., Rocha, H. and Vicente, L. N. (2009), ‘Incorporating minimum Frobenius norm models in direct search’, Comput. Optim. Appl.46, 265-278.
[134] Dai, L. (2016a), Convergence rates of finite difference stochastic approximation algorithms, I: General sampling. In Sensing and Analysis Technologies for Biomedical and Cognitive Applications (Dai, L.et al., eds), SPIE.
[135] Dai, L. (2016b), Convergence rates of finite difference stochastic approximation algorithms, II: Implementation via common random numbers. In Sensing and Analysis Technologies for Biomedical and Cognitive Applications (Dai, L.et al., eds), SPIE.
[136] D’Ambrosio, C., Nannicini, G. and Sartor, G. (2017), ‘MILP models for the selection of a small set of well-distributed points’, Oper. Res. Lett.45, 46-52. · Zbl 1409.90113
[137] Davis, C., Theory of positive linear dependence, Amer. J. Math., 76, 733-746, (1954) · Zbl 0058.25201
[138] Davis, P. (2005), Michael J. D. Powell: An oral history. In History of Numerical Analysis and Scientific Computing Project, SIAM.
[139] Davis, C. and Hare, W. L. (2013), ‘Exploiting known structures to approximate normal cones’, Math. Oper. Res.38, 665-681. · Zbl 1292.49032
[140] De Leone, R., Gaudioso, M. and Grippo, L. (1984), ‘Stopping criteria for linesearch methods without derivatives’, Math. Program.30, 285-300. · Zbl 0576.90087
[141] Dekel, O., Eldan, R. and Koren, T. (2015), Bandit smooth convex optimization: Improving the bias-variance tradeoff. In Advances in Neural Information Processing Systems 28 (Cortes, C.et al., eds), Curran Associates, pp. 2926-2934.
[142] Dener, A., Denchfield, A., Munson, T., Sarich, J., Wild, S. M., Benson, S. and Curfman Mcinnes, L. (2018), TAO 3.10 users manual. Technical Memorandum ANL/MCS-TM-322, Argonne National Laboratory.
[143] Deng, G. and Ferris, M. C. (2009), ‘Variable-number sample-path optimization’, Math. Program.117, 81-109. · Zbl 1165.90013
[144] Deng, G. and Ferris, M. C. (2006), Adaptation of the UOBYQA algorithm for noisy functions. In Proceedings of the 2006 Winter Simulation Conference, IEEE, pp. 312-319.
[145] Dennis, J. E. Jr and Torczon, V. (1991), ‘Direct search methods on parallel machines’, SIAM J. Optim.1, 448-474. · Zbl 0754.90051
[146] Dennis, J. E. Jr and Torczon, V. (1997), Managing approximation models in optimization. In Multidisciplinary Design Optimization: State-of-the-Art (Alexandrov, N. M. and Hussaini, M. Y., eds), SIAM, pp. 330-347.
[147] Dennis, J. E. Jr and Woods, D. J. (1987), Optimization on microcomputers: The Nelder-Mead simplex algorithm. In New Computing Environments: Microcomputers in Large-Scale Computing (Wouk, A., ed.), SIAM, pp. 116-122.
[148] Derman, C. (1956), ‘An application of Chung’s lemma to the Kiefer-Wolfowitz stochastic approximation procedure’, Ann. Math. Statist. 27, 532-536. · Zbl 0074.35502
[149] Diaz, G. I., Fokoue-Nkoutche, A., Nannicini, G. and Samulowitz, H. (2017), ‘An effective algorithm for hyperparameter optimization of neural networks’, IBM J. Res. Develop. 61, 9:1-9:11.
[150] Diniz-Ehrhardt, M. A., Martínez, J. M. and Pedroso, L. G. (2011), ‘Derivative-free methods for nonlinear programming with general lower-level constraints’, Comput. Appl. Math. 30, 19-52. · Zbl 1222.90060
[151] Diniz-Ehrhardt, M. A., Martínez, J. M. and Raydan, M. (2008), ‘A derivative-free nonmonotone line-search technique for unconstrained optimization’, J. Comput. Appl. Math. 219, 383-397. · Zbl 1149.65041
[152] Dodangeh, M. and Vicente, L. N. (2016), ‘Worst case complexity of direct search under convexity’, Math. Program. 155, 307-332. · Zbl 1338.90462
[153] Dodangeh, M., Vicente, L. N. and Zhang, Z. (2016), ‘On the optimal order of worst case complexity of direct search’, Optim. Lett. 10, 699-708. · Zbl 1346.90764
[154] Dolan, E. D., Lewis, R. M. and Torczon, V. (2003), ‘On the local convergence of pattern search’, SIAM J. Optim. 14, 567-583. · Zbl 1055.65074
[155] Duchi, J. C., Jordan, M. I., Wainwright, M. J. and Wibisono, A. (2015), ‘Optimal rates for zero-order convex optimization: The power of two function evaluations’, IEEE Trans. Inform. Theory 61, 2788-2806. · Zbl 1359.90155
[156] Durrett, R. (2010), Probability: Theory and Examples, Cambridge University Press. · Zbl 1202.60001
[157] Dvurechensky, P., Gasnikov, A. and Gorbunov, E. (2018), An accelerated method for derivative-free smooth stochastic convex optimization. arXiv:1802.09022
[158] Echebest, N., Schuverdt, M. L. and Vignau, R. P. (2015), ‘An inexact restoration derivative-free filter method for nonlinear programming’, Comput. Appl. Math. 36, 693-718. · Zbl 1359.65092
[159] Ehrgott, M. (2005), Multicriteria Optimization, Springer. · Zbl 1132.90001
[160] Elster, C. and Neumaier, A. (1995), ‘A grid algorithm for bound constrained optimization of noisy functions’, IMA J. Numer. Anal. 15, 585-608. · Zbl 0831.65063
[161] Fabian, V. (1971), Stochastic approximation. In Optimizing Methods in Statistics, Elsevier, pp. 439-470.
[162] Fasano, G., Liuzzi, G., Lucidi, S. and Rinaldi, F. (2014), ‘A linesearch-based derivative-free approach for nonsmooth constrained optimization’, SIAM J. Optim. 24, 959-992. · Zbl 1302.90207
[163] Fasano, G., Morales, J. L. and Nocedal, J. (2009), ‘On the geometry phase in model-based algorithms for derivative-free optimization’, Optim. Methods Software 24, 145-154. · Zbl 1154.90589
[164] Fermi, E. and Metropolis, N. (1952), Numerical solution of a minimum problem. Technical report LA-1492, Los Alamos Scientific Laboratory of the University of California.
[165] Ferreira, P. S., Karas, E. W., Sachine, M. and Sobral, F. N. C. (2017), ‘Global convergence of a derivative-free inexact restoration filter algorithm for nonlinear programming’, Optimization 66, 271-292. · Zbl 1365.90237
[166] Finkel, D. E. and Kelley, C. T. (2004), Convergence analysis of the DIRECT algorithm. Technical report 04-28, Center for Research in Scientific Computation, North Carolina State University. https://projects.ncsu.edu/crsc/reports/ftp/pdf/crsc-tr04-28.pdf
[167] Finkel, D. E. and Kelley, C. T. (2006), ‘Additive scaling and the DIRECT algorithm’, J. Global Optim. 36, 597-608. · Zbl 1142.90488
[168] Finkel, D. E. and Kelley, C. T. (2009), ‘Convergence analysis of sampling methods for perturbed Lipschitz functions’, Pacific J. Optim. 5, 339-350. · Zbl 1181.65084
[169] Flaxman, A., Kalai, A. and McMahan, B. (2005), Online convex optimization in the bandit setting: Gradient descent without a gradient. In 16th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA ’05), ACM, pp. 385-394. · Zbl 1297.90117
[170] Fletcher, R. (1965), ‘Function minimization without evaluating derivatives: A review’, Comput. J. 8, 33-41. · Zbl 0139.10401
[171] Fletcher, R. (1987), Practical Methods of Optimization, Wiley. · Zbl 0905.65002
[172] Forrester, A., Sobester, A. and Keane, A. (2008), Engineering Design via Surrogate Modelling: A Practical Guide, Wiley.
[173] Forth, S., Hovland, P., Phipps, E., Utke, J. and Walther, A., eds (2012), Recent Advances in Algorithmic Differentiation, Springer. · Zbl 1247.65002
[174] Frandi, E. and Papini, A. (2013), ‘Coordinate search algorithms in multilevel optimization’, Optim. Methods Software 29, 1020-1041. · Zbl 1305.90382
[175] Frazier, P. I. (2018), A tutorial on Bayesian optimization. arXiv:1807.02811
[176] Frimannslund, L. and Steihaug, T. (2007), ‘A generating set search method using curvature information’, Comput. Optim. Appl. 38, 105-121. · Zbl 1171.90552
[177] Frimannslund, L. and Steihaug, T. (2010), A new generating set search algorithm for partially separable functions. In 4th International Conference on Advanced Engineering Computing and Applications in Sciences (ADVCOMP 2010), IARIA, pp. 65-70.
[178] Frimannslund, L. and Steihaug, T. (2011), ‘On a new method for derivative free optimization’, Internat. J. Adv. Software 4, 244-255.
[179] Fu, M. C., Glover, F. W. and April, J. (2005), Simulation optimization: A review, new developments, and applications. In Proceedings of the 2005 Winter Simulation Conference, IEEE.
[180] Gao, F. and Han, L. (2012), ‘Implementing the Nelder-Mead simplex algorithm with adaptive parameters’, Comput. Optim. Appl. 51, 259-277. · Zbl 1245.90121
[181] García-Palomares, U. M. and Rodríguez, J. F. (2002), ‘New sequential and parallel derivative-free algorithms for unconstrained minimization’, SIAM J. Optim. 13, 79-96.
[182] García-Palomares, U. M., García-Urrea, I. J. and Rodríguez-Hernández, P. S. (2013), ‘On sequential and parallel non-monotone derivative-free algorithms for box constrained optimization’, Optim. Methods Software 28, 1233-1261. · Zbl 1278.65088
[183] Garmanjani, R. and Vicente, L. N. (2012), ‘Smoothing and worst-case complexity for direct-search methods in nonsmooth optimization’, IMA J. Numer. Anal. 33, 1008-1028. · Zbl 1272.65050
[184] Garmanjani, R., Júdice, D. and Vicente, L. N. (2016), ‘Trust-region methods without using derivatives: Worst case complexity and the nonsmooth case’, SIAM J. Optim. 26, 1987-2011.
[185] Gasnikov, A. V., Krymova, E. A., Lagunovskaya, A. A., Usmanova, I. N. and Fedorenko, F. A. (2017), ‘Stochastic online optimization: Single-point and multi-point non-linear multi-armed bandits; Convex and strongly-convex case’, Automat. Remote Control 78, 224-234. · Zbl 1362.93165
[186] Gerencsér, L. (1997), Rate of convergence of moments of Spall’s SPSA method. In Stochastic Differential and Difference Equations, Birkhäuser, pp. 67-75. · Zbl 1007.62524
[187] Ghadimi, S. (2019), ‘Conditional gradient type methods for composite nonlinear and stochastic optimization’, Math. Program. 173, 431-464. · Zbl 1410.90150
[188] Ghadimi, S. and Lan, G. (2013), ‘Stochastic first- and zeroth-order methods for nonconvex stochastic programming’, SIAM J. Optim. 23, 2341-2368. · Zbl 1295.90026
[189] Gill, P. E., Murray, W. and Wright, M. H. (1981), Practical Optimization, Academic Press. · Zbl 0503.90062
[190] Gill, P. E., Murray, W., Saunders, M. A. and Wright, M. H. (1983), ‘Computing forward-difference intervals for numerical optimization’, SIAM J. Sci. Statist. Comput. 4, 310-321. · Zbl 0514.65044
[191] Gilmore, P. and Kelley, C. T. (1995), ‘An implicit filtering algorithm for optimization of functions with many local minima’, SIAM J. Optim. 5, 269-285. · Zbl 0828.65064
[192] Glass, H. and Cooper, L. (1965), ‘Sequential search: A method for solving constrained optimization problems’, J. Assoc. Comput. Mach. 12, 71-82. · Zbl 0156.38804
[193] Golovin, D., Solnik, B., Moitra, S., Kochanski, G., Karro, J. and Sculley, D. (2017), Google Vizier: A service for black-box optimization. In 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD ’17), ACM, pp. 1487-1495.
[194] Gonzaga, C. C., Karas, E. and Vanti, M. (2004), ‘A globally convergent filter method for nonlinear programming’, SIAM J. Optim. 14, 646-669. · Zbl 1079.90129
[195] Gould, N. I. M. and Toint, P. L. (2010), ‘Nonlinear programming without a penalty function or a filter’, Math. Program. 122, 155-196. · Zbl 1216.90069
[196] Gramacy, R. B. and Le Digabel, S. (2015), ‘The mesh adaptive direct search algorithm with treed Gaussian process surrogates’, Pacific J. Optim. 11, 419-447. · Zbl 1327.90308
[197] Grapiglia, G. N., Yuan, J. and Yuan, Y.-X. (2016), ‘A derivative-free trust-region algorithm for composite nonsmooth optimization’, Comput. Appl. Math. 35, 475-499. · Zbl 1371.49014
[198] Gratton, S. and Vicente, L. N. (2014), ‘A merit function approach for direct search’, SIAM J. Optim. 24, 1980-1998. · Zbl 1318.90077
[199] Gratton, S., Royer, C. W. and Vicente, L. N. (2016), ‘A second-order globally convergent direct-search method and its worst-case complexity’, Optimization 65, 1105-1128. · Zbl 1338.90463
[200] Gratton, S., Royer, C. W. and Vicente, L. N. (2019a), ‘A decoupled first/second-order steps technique for nonconvex nonlinear unconstrained optimization with improved complexity bounds’, Math. Program. doi:10.1007/s10107-018-1328-7
[201] Gratton, S., Royer, C. W., Vicente, L. N. and Zhang, Z. (2015), ‘Direct search based on probabilistic descent’, SIAM J. Optim. 25, 1515-1541. · Zbl 1327.90395
[202] Gratton, S., Royer, C. W., Vicente, L. N. and Zhang, Z. (2018), ‘Complexity and global rates of trust-region methods based on probabilistic models’, IMA J. Numer. Anal. 38, 1579-1597. · Zbl 1477.65093
[203] Gratton, S., Royer, C. W., Vicente, L. N. and Zhang, Z. (2019b), ‘Direct search based on probabilistic feasible descent for bound and linearly constrained problems’, Comput. Optim. Appl. 72, 525-559. · Zbl 1420.90083
[204] Gratton, S., Toint, P. L. and Tröltzsch, A. (2011), ‘An active-set trust-region method for derivative-free nonlinear bound-constrained optimization’, Optim. Methods Software 26, 873-894. · Zbl 1229.90138
[205] Gray, G. A. and Kolda, T. G. (2006), ‘Algorithm 856: APPSPACK 4.0: Asynchronous parallel pattern search for derivative-free optimization’, ACM Trans. Math. Software 32, 485-507. · Zbl 1230.90196
[206] Griewank, A. (2003), A mathematical view of automatic differentiation. In Acta Numerica, Vol. 12, Cambridge University Press, pp. 321-398. · Zbl 1047.65012
[207] Griewank, A. and Walther, A. (2008), Evaluating Derivatives: Principles and Techniques of Algorithmic Differentiation, SIAM. · Zbl 1159.65026
[208] Griewank, A., Walther, A., Fiege, S. and Bosse, T. (2016), ‘On Lipschitz optimization based on gray-box piecewise linearization’, Math. Program. 158, 383-415. · Zbl 1350.49038
[209] Grippo, L. and Rinaldi, F. (2014), ‘A class of derivative-free nonmonotone optimization algorithms employing coordinate rotations and gradient approximations’, Comput. Optim. Appl. 60, 1-33. · Zbl 1316.90065
[210] Grippo, L. and Sciandrone, M. (2007), ‘Nonmonotone derivative-free methods for nonlinear equations’, Comput. Optim. Appl. 37, 297-328. · Zbl 1180.90310
[211] Grippo, L., Lampariello, F. and Lucidi, S. (1988), ‘Global convergence and stabilization of unconstrained minimization methods without derivatives’, J. Optim. Theory Appl. 56, 385-406. · Zbl 0619.90063
[212] Gu, B., Huo, Z. and Huang, H. (2016), Zeroth-order asynchronous doubly stochastic algorithm with variance reduction. arXiv:1612.01425
[213] Gumma, E. A. E., Hashim, M. H. A. and Ali, M. M. (2014), ‘A derivative-free algorithm for linearly constrained optimization problems’, Comput. Optim. Appl. 57, 599-621. · Zbl 1304.90193
[214] Gunzburger, M. D., Webster, C. G. and Zhang, G. (2014), Stochastic finite element methods for partial differential equations with random input data. In Acta Numerica, Vol. 23, Cambridge University Press, pp. 521-650. · Zbl 1398.65299
[215] Gutmann, H.-M. (2001), ‘A radial basis function method for global optimization’, J. Global Optim. 19, 201-227. · Zbl 0972.90055
[216] Han, L. and Liu, G. (2004), ‘On the convergence of the UOBYQA method’, J. Appl. Math. Comput. 16, 125-142. · Zbl 1129.90380
[217] Hansen, P., Jaumard, B. and Lu, S.-H. (1991), ‘On the number of iterations of Piyavskii’s global optimization algorithm’, Math. Oper. Res. 16, 334-350. · Zbl 0742.90069
[218] Hare, W. L. (2014), ‘Numerical analysis of \({\mathcal{V}}{\mathcal{U}}\)-decomposition, \({\mathcal{U}}\)-gradient, and \({\mathcal{U}}\)-Hessian approximations’, SIAM J. Optim. 24, 1890-1913. · Zbl 1318.49053
[219] Hare, W. L. (2017), ‘Compositions of convex functions and fully linear models’, Optim. Lett. 11, 1217-1227. · Zbl 1408.90323
[220] Hare, W. L. and Lewis, A. S. (2005), ‘Estimating tangent and normal cones without calculus’, Math. Oper. Res. 30, 785-799. · Zbl 1254.49008
[221] Hare, W. L. and Lucet, Y. (2013), ‘Derivative-free optimization via proximal point methods’, J. Optim. Theory Appl. 160, 204-220. · Zbl 1291.90317
[222] Hare, W. L. and Macklem, M. (2013), ‘Derivative-free optimization methods for finite minimax problems’, Optim. Methods Software 28, 300-312. · Zbl 1270.90099
[223] Hare, W. L. and Nutini, J. (2013), ‘A derivative-free approximate gradient sampling algorithm for finite minimax problems’, Comput. Optim. Appl. 56, 1-38. · Zbl 1300.90064
[224] Hare, W. L., Planiden, C. and Sagastizábal, C. (2019), A derivative-free \({\mathcal{V}}{\mathcal{U}}\)-algorithm for convex finite-max problems. arXiv:1903.11184
[225] Hazan, E., Koren, T. and Levy, K. Y. (2014), Logistic regression: Tight bounds for stochastic and online optimization. In 27th Conference on Learning Theory (COLT 2014) (Balcan, M. F. et al., eds). Proc. Mach. Learn. Res. 35, 197-209.
[226] He, J., Verstak, A., Sosonkina, M. and Watson, L. T. (2009a), ‘Performance modeling and analysis of a massively parallel DIRECT, part 2’, Internat. J. High Performance Comput. Appl. 23, 29-41.
[227] He, J., Verstak, A., Watson, L. T. and Sosonkina, M. (2007), ‘Design and implementation of a massively parallel version of DIRECT’, Comput. Optim. Appl. 40, 217-245. · Zbl 1181.90138
[228] He, J., Verstak, A., Watson, L. T. and Sosonkina, M. (2009b), ‘Performance modeling and analysis of a massively parallel DIRECT, part 1’, Internat. J. High Performance Comput. Appl. 23, 14-28.
[229] He, J., Watson, L. T. and Sosonkina, M. (2009c), ‘Algorithm 897: VTDIRECT95: Serial and parallel codes for the global optimization algorithm DIRECT’, ACM Trans. Math. Software 36, 1-24. · Zbl 1364.65127
[230] Homem-de-Mello, T. and Bayraksan, G. (2014), ‘Monte Carlo sampling-based methods for stochastic optimization’, Surv. Oper. Res. Manag. Sci. 19, 56-85.
[231] Hooke, R. and Jeeves, T. A. (1961), ‘“Direct search” solution of numerical and statistical problems’, J. Assoc. Comput. Mach. 8, 212-229. · Zbl 0111.12501
[232] Hough, P. D. and Meza, J. C. (2002), ‘A class of trust-region methods for parallel optimization’, SIAM J. Optim. 13, 264-282. · Zbl 1014.65051
[233] Hough, P. D., Kolda, T. G. and Torczon, V. J. (2001), ‘Asynchronous parallel pattern search for nonlinear optimization’, SIAM J. Sci. Comput. 23, 134-156. · Zbl 0990.65067
[234] Hu, X., Prashanth, L. A., György, A. and Szepesvári, C. (2016), (Bandit) convex optimization with biased noisy gradient oracles. In 19th International Conference on Artificial Intelligence and Statistics. Proc. Mach. Learn. Res. 51, 819-828.
[235] Hutchison, D. W. and Spall, J. C. (2013), Stochastic approximation. In Encyclopedia of Operations Research and Management Science, Springer, pp. 1470-1476.
[236] Huyer, W. and Neumaier, A. (1999), ‘Global optimization by multilevel coordinate search’, J. Global Optim. 14, 331-355. · Zbl 0956.90045
[237] Huyer, W. and Neumaier, A. (2008), ‘SNOBFIT: Stable noisy optimization by branch and fit’, ACM Trans. Math. Software 35, 9:1-9:25.
[238] Ilievski, I., Akhtar, T., Feng, J. and Shoemaker, C. (2017), Efficient hyperparameter optimization for deep learning algorithms using deterministic RBF surrogates. In 31st AAAI Conference on Artificial Intelligence (AAAI ’17), pp. 822-829.
[239] Jamieson, K. G., Nowak, R. D. and Recht, B. (2012), Query complexity of derivative-free optimization. In Advances in Neural Information Processing Systems 25 (Pereira, F. et al., eds), Curran Associates, pp. 2672-2680.
[240] Jones, D. R., Perttunen, C. D. and Stuckman, B. E. (1993), ‘Lipschitzian optimization without the Lipschitz constant’, J. Optim. Theory Appl. 79, 157-181. · Zbl 0796.49032
[241] Jones, D. R., Schonlau, M. and Welch, W. J. (1998), ‘Efficient global optimization of expensive black-box functions’, J. Global Optim. 13, 455-492. · Zbl 0917.90270
[242] Karmanov, V. G. (1974), ‘Convergence estimates for iterative minimization methods’, USSR Comput. Math. Math. Phys. 14, 1-13. · Zbl 0297.90076
[243] Kelley, C. T. (1999), ‘Detection and remediation of stagnation in the Nelder-Mead algorithm using a sufficient decrease condition’, SIAM J. Optim. 10, 43-55. · Zbl 0962.65048
[244] Kelley, C. T. (1999), Iterative Methods for Optimization, SIAM. · Zbl 0934.90082
[245] Kelley, C. T. (2003), Implicit filtering and nonlinear least squares problems. In System Modeling and Optimization XX (Sachs, E. W. and Tichatschke, R., eds), Springer, pp. 71-90. · Zbl 1096.93031
[246] Kelley, C. T. (2011), Implicit Filtering, SIAM. · Zbl 1246.68017
[247] Khan, K. A., Larson, J. and Wild, S. M. (2018), ‘Manifold sampling for optimization of nonconvex functions that are piecewise linear compositions of smooth components’, SIAM J. Optim. 28, 3001-3024. · Zbl 1407.90351
[248] Kiefer, J. and Wolfowitz, J. (1952), ‘Stochastic estimation of the maximum of a regression function’, Ann. Math. Statist. 23, 462-466. · Zbl 0049.36601
[249] Kim, S. and Ryu, J.-H. (2011), ‘A trust-region algorithm for bi-objective stochastic optimization’, Procedia Comput. Sci. 4, 1422-1430.
[250] Kim, S. and Zhang, D. (2010), Convergence properties of direct search methods for stochastic optimization. In Proceedings of the 2010 Winter Simulation Conference, IEEE.
[251] Kim, S., Pasupathy, R. and Henderson, S. G. (2015), A guide to sample average approximation. In Handbook of Simulation Optimization (Fu, M., ed.), Vol. 216, Springer, pp. 207-243.
[252] Kiwiel, K. C. (2010), ‘A nonderivative version of the gradient sampling algorithm for nonsmooth nonconvex optimization’, SIAM J. Optim. 20, 1983-1994. · Zbl 1205.90230
[253] Kleinberg, R., Slivkins, A. and Upfal, E. (2008), Multi-armed bandits in metric spaces. In 40th Annual ACM Symposium on Theory of Computing (STOC 2008), ACM, pp. 681-690. · Zbl 1231.91048
[254] Kleinman, N. L., Spall, J. C. and Naiman, D. Q. (1999), ‘Simulation-based optimization with stochastic approximation using common random numbers’, Manag. Sci. 45, 1570-1578. · Zbl 0946.90040
[255] Knowles, J. and Corne, D. (2002), On metrics for comparing nondominated sets. In Proceedings of the 2002 Congress on Evolutionary Computation, Vol. 1, IEEE, pp. 711-716.
[256] Kolda, T. G., Lewis, R. M. and Torczon, V. (2006), ‘Stationarity results for generating set search for linearly constrained optimization’, SIAM J. Optim. 17, 943-968. · Zbl 1126.90076
[257] Kolda, T. G., Lewis, R. M. and Torczon, V. J. (2003), ‘Optimization by direct search: New perspectives on some classical and modern methods’, SIAM Review 45, 385-482. · Zbl 1059.90146
[258] Konečný, J. and Richtárik, P. (2014), Simple complexity analysis of simplified direct search. arXiv:1410.0390
[259] Kushner, H. and Yin, G. (2003), Stochastic Approximation and Recursive Algorithms and Applications, Springer. · Zbl 1026.62084
[260] Kushner, H. J. and Huang, H. (1979), ‘Rates of convergence for stochastic approximation type algorithms’, SIAM J. Control Optim. 17, 607-617. · Zbl 0418.93093
[261] La Cruz, W. (2014), ‘A projected derivative-free algorithm for nonlinear equations with convex constraints’, Optim. Methods Software 29, 24-41. · Zbl 1285.65028
[262] La Cruz, W., Martínez, J. M. and Raydan, M. (2006), ‘Spectral residual method without gradient information for solving large-scale nonlinear systems of equations’, Math. Comput. 75(255), 1429-1449. · Zbl 1122.65049
[263] Lagarias, J. C., Poonen, B. and Wright, M. H. (2012), ‘Convergence of the restricted Nelder-Mead algorithm in two dimensions’, SIAM J. Optim. 22, 501-532. · Zbl 1246.49029
[264] Lagarias, J. C., Reeds, J. A., Wright, M. H. and Wright, P. E. (1998), ‘Convergence properties of the Nelder-Mead simplex algorithm in low dimensions’, SIAM J. Optim. 9, 112-147. · Zbl 1005.90056
[265] Lai, T. L. and Robbins, H. (1985), ‘Asymptotically efficient adaptive allocation rules’, Adv. Appl. Math. 6, 4-22. · Zbl 0568.62074
[266] Larson, J. and Billups, S. C. (2016), ‘Stochastic derivative-free optimization using a trust region framework’, Comput. Optim. Appl. 64, 619-645. · Zbl 1381.90098
[267] Larson, J. and Wild, S. M. (2016), ‘A batch, derivative-free algorithm for finding multiple local minima’, Optim. Engng 17, 205-228. · Zbl 1364.90363
[268] Larson, J. and Wild, S. M. (2018), ‘Asynchronously parallel optimization solver for finding multiple minima’, Math. Program. Comput. 10, 303-332. · Zbl 1398.65123
[269] Larson, J., Leyffer, S., Palkar, P. and Wild, S. M. (2019), A method for convex black-box integer global optimization. arXiv:1903.11366
[270] Larson, J., Menickelly, M. and Wild, S. M. (2016), ‘Manifold sampling for \(\ell _{1}\) nonconvex optimization’, SIAM J. Optim. 26, 2540-2563. · Zbl 1351.90167
[271] Latorre, V., Habal, H., Graeb, H. and Lucidi, S. (2019), ‘Derivative free methodologies for circuit worst case analysis’, Optim. Lett. doi:10.1007/s11590-018-1364-5
[272] Lazar, M. and Jarre, F. (2016), ‘Calibration by optimization without using derivatives’, Optim. Engng 17, 833-860. · Zbl 1364.90408
[273] Le Digabel, S. (2011), ‘Algorithm 909: NOMAD: Nonlinear optimization with the MADS algorithm’, ACM Trans. Math. Software 37, 44:1-44:15. · Zbl 1365.65172
[274] Le Digabel, S. and Wild, S. M. (2015), A taxonomy of constraints in black-box simulation-based optimization. arXiv:1505.07881
[275] Le Gratiet, L. and Cannamela, C. (2015), ‘Cokriging-based sequential design strategies using fast cross-validation techniques for multi-fidelity computer codes’, Technometrics 57, 418-427.
[276] L’Ecuyer, P. and Yin, G. (1998), ‘Budget-dependent convergence rate of stochastic approximation’, SIAM J. Optim. 8, 217-247. · Zbl 0911.60003
[277] Lee, H., Gramacy, R. B., Linkletter, C. and Gray, G. A. (2011), ‘Optimization subject to hidden constraints via statistical emulation’, Pacific J. Optim. 7, 467-478. · Zbl 05964910
[278] Lemarechal, C. and Mifflin, R., eds (1978), Nonsmooth Optimization: Proceedings of an IIASA Workshop, Pergamon Press.
[279] Levenberg, K. (1944), ‘A method for the solution of certain non-linear problems in least squares’, Quart. Appl. Math. 2, 164-168. · Zbl 0063.03501
[280] Lewis, R. M. and Torczon, V. (1999), ‘Pattern search algorithms for bound constrained minimization’, SIAM J. Optim. 9, 1082-1099. · Zbl 1031.90047
[281] Lewis, R. M. and Torczon, V. (2000), ‘Pattern search methods for linearly constrained minimization’, SIAM J. Optim. 10, 917-941. · Zbl 1031.90048
[282] Lewis, R. M. and Torczon, V. (2002), ‘A globally convergent augmented Lagrangian pattern search algorithm for optimization with general constraints and simple bounds’, SIAM J. Optim. 12, 1075-1089. · Zbl 1011.65030
[283] Lewis, R. M. and Torczon, V. (2010), A direct search approach to nonlinear programming problems using an augmented Lagrangian method with explicit treatment of the linear constraints. Technical report WM-CS-2010-01, Department of Computer Science, College of William and Mary.
[284] Lewis, R. M., Torczon, V. and Trosset, M. W. (2000), ‘Direct search methods: Then and now’, J. Comput. Appl. Math. 124, 191-207. · Zbl 0969.65055
[285] Leyffer, S. (2015), ‘It’s to solve problems: An interview with Roger Fletcher’, Optima 99, 1-6.
[286] Li, D.-H. and Fukushima, M. (2000), ‘A derivative-free line search and global convergence of Broyden-like method for nonlinear equations’, Optim. Methods Software 13, 181-201. · Zbl 0960.65076
[287] Li, Q. and Li, D.-H. (2011), ‘A class of derivative-free methods for large-scale nonlinear monotone equations’, IMA J. Numer. Anal. 31, 1625-1635. · Zbl 1241.65047
[288] Liu, Q., Zeng, J. and Yang, G. (2015), ‘MrDIRECT: A multilevel robust DIRECT algorithm for global optimization problems’, J. Global Optim. 62, 205-227. · Zbl 1326.90065
[289] Liu, S., Kailkhura, B., Chen, P.-Y., Ting, P., Chang, S. and Amini, L. (2018), Zeroth-order stochastic variance reduction for nonconvex optimization. In Advances in Neural Information Processing Systems 31 (Bengio, S. et al., eds), Curran Associates, pp. 3731-3741.
[290] Liuzzi, G. and Lucidi, S. (2009), ‘A derivative-free algorithm for inequality constrained nonlinear programming via smoothing of an \(\ell _{\infty }\) penalty function’, SIAM J. Optim. 20, 1-29. · Zbl 1187.65065
[291] Liuzzi, G., Lucidi, S. and Rinaldi, F. (2011), ‘Derivative-free methods for bound constrained mixed-integer optimization’, Comput. Optim. Appl. 53, 505-526. · Zbl 1257.90058
[292] Liuzzi, G., Lucidi, S. and Rinaldi, F. (2015), ‘Derivative-free methods for mixed-integer constrained optimization problems’, J. Optim. Theory Appl. 164, 933-965. · Zbl 1321.90087
[293] Liuzzi, G., Lucidi, S. and Rinaldi, F. (2016), ‘A derivative-free approach to constrained multiobjective nonsmooth optimization’, SIAM J. Optim. 26, 2744-2774. · Zbl 1358.90133
[294] Liuzzi, G., Lucidi, S. and Rinaldi, F. (2018), An algorithmic framework based on primitive directions and nonmonotone line searches for black box problems with integer variables. Report 6471, Optimization Online. http://www.optimization-online.org/DB_HTML/2018/02/6471.html
[295] Liuzzi, G., Lucidi, S. and Sciandrone, M. (2006), ‘A derivative-free algorithm for linearly constrained finite minimax problems’, SIAM J. Optim. 16, 1054-1075. · Zbl 1131.90074
[296] Liuzzi, G., Lucidi, S. and Sciandrone, M. (2010), ‘Sequential penalty derivative-free methods for nonlinear constrained optimization’, SIAM J. Optim. 20, 2614-2635. · Zbl 1223.65045
[297] Lucidi, S. and Sciandrone, M. (2002a), ‘A derivative-free algorithm for bound constrained optimization’, Comput. Optim. Appl. 21, 119-142. · Zbl 0988.90033
[298] Lucidi, S. and Sciandrone, M. (2002b), ‘On the global convergence of derivative-free methods for unconstrained optimization’, SIAM J. Optim. 13, 97-116. · Zbl 1027.90112
[299] Lucidi, S., Sciandrone, M. and Tseng, P. (2002), ‘Objective-derivative-free methods for constrained optimization’, Math. Program. 92, 37-59. · Zbl 1024.90062
[300] Ma, J. and Zhang, X. (2009), ‘Pattern search methods for finite minimax problems’, J. Appl. Math. Comput. 32, 491-506. · Zbl 1196.65104
[301] Madsen, K. (1975), Minimax solution of non-linear equations without calculating derivatives. In Nondifferentiable Optimization (Balinski, M. L. and Wolfe, P., eds), Springer, pp. 110-126. · Zbl 0364.90085
[302] Maggiar, A., Wächter, A., Dolinskaya, I. S. and Staum, J. (2018), ‘A derivative-free trust-region algorithm for the optimization of functions smoothed via Gaussian convolution using adaptive multiple importance sampling’, SIAM J. Optim. 28, 1478-1507. · Zbl 1390.90410
[303] Marazzi, M. and Nocedal, J. (2002), ‘Wedge trust region methods for derivative free optimization’, Math. Program. 91, 289-305. · Zbl 1049.90134
[304] March, A. and Willcox, K. (2012), ‘Provably convergent multifidelity optimization algorithm not requiring high-fidelity derivatives’, AIAA J. 50, 1079-1089.
[305] Marquardt, D. W. (1963), ‘An algorithm for least-squares estimation of nonlinear parameters’, J. Soc. Ind. Appl. Math. 11, 431-441. · Zbl 0112.10505
[306] Martínez, J. M. and Sobral, F. N. C. (2012), ‘Constrained derivative-free optimization on thin domains’, J. Global Optim. 56, 1217-1232. · Zbl 1272.90091
[307] Matyas, J. (1965), ‘Random optimization’, Automat. Remote Control 26, 246-253. · Zbl 0151.22802
[308] May, J. H. (1974), Linearly constrained nonlinear programming: A solution method that does not require analytic derivatives. PhD thesis, Yale University.
[309] May, J. H. (1979), ‘Solving nonlinear programs without using analytic derivatives’, Oper. Res. 27, 457-484. · Zbl 0417.90079
[310] McKinney, R. L. (1962), ‘Positive bases for linear spaces’, Trans. Amer. Math. Soc. 103, 131-148. · Zbl 0115.32901
[311] McKinnon, K. I. M. (1998), ‘Convergence of the Nelder-Mead simplex method to a nonstationary point’, SIAM J. Optim. 9, 148-158. · Zbl 0962.65049
[312] Menickelly, M. (2017), Random models in nonlinear optimization. PhD thesis, Lehigh University.
[313] Menickelly, M. and Wild, S. M. (2019), ‘Derivative-free robust optimization by outer approximations’, Math. Program. doi:10.1007/s10107-018-1326-9
[314] Mersha, A. G. and Dempe, S. (2011), ‘Direct search algorithm for bilevel programming problems’, Comput. Optim. Appl. 49, 1-15. · Zbl 1242.90240
[315] Mifflin, R. (1975), ‘A superlinearly convergent algorithm for minimization without evaluating derivatives’, Math. Program. 9, 100-117. · Zbl 0327.90027
[316] Mockus, J. (1989), Bayesian Approach to Global Optimization: Theory and Applications, Springer. · Zbl 0693.49001
[317] Moré, J. J. (1978), The Levenberg-Marquardt algorithm: Implementation and theory. In Numerical Analysis: Dundee 1977 (Watson, G. A., ed.), Springer, pp. 105-116.
[318] Moré, J. J. and Sorensen, D. C. (1983), ‘Computing a trust region step’, SIAM J. Sci. Statist. Comput. 4, 553-572. · Zbl 0551.65042
[319] Moré, J. J. and Wild, S. M. (2009), ‘Benchmarking derivative-free optimization algorithms’, SIAM J. Optim. 20, 172-191. · Zbl 1187.90319
[320] Moré, J. J. and Wild, S. M. (2011), ‘Estimating computational noise’, SIAM J. Sci. Comput. 33, 1292-1314. · Zbl 1243.65016
[321] Moré, J. J. and Wild, S. M. (2012), ‘Estimating derivatives of noisy simulations’, ACM Trans. Math. Software 38, 19:1-19:21.
[322] Moré, J. J. and Wild, S. M. (2014), ‘Do you trust derivatives or differences?’, J. Comput. Phys. 273, 268-277. · Zbl 1352.65668
[323] Morini, B., Porcelli, M. and Toint, P. L. (2018), ‘Approximate norm descent methods for constrained nonlinear systems’, Math. Comput. 87(311), 1327-1351. · Zbl 1383.65051
[324] Müller, J. (2016), ‘MISO: Mixed-integer surrogate optimization framework’, Optim. Engng 17, 177-203. · Zbl 1364.90230
[325] Müller, J. and Day, M. (2019), ‘Surrogate optimization of computationally expensive black-box problems with hidden constraints’, INFORMS J. Comput., to appear.
[326] Müller, J. and Woodbury, J. D. (2017), ‘GOSAC: Global optimization with surrogate approximation of constraints’, J. Global Optim. 69, 117-136. · Zbl 1373.90118
[327] Müller, J., Shoemaker, C. A. and Piché, R. (2013a), ‘SO-I: A surrogate model algorithm for expensive nonlinear integer programming problems including global optimization applications’, J. Global Optim. 59, 865-889.
[328] Müller, J., Shoemaker, C. A. and Piché, R. (2013b), ‘SO-MI: A surrogate model algorithm for computationally expensive nonlinear mixed-integer black-box global optimization problems’, Comput. Oper. Res. 40, 1383-1400. · Zbl 1352.90067
[329] Munos, R. (2011), Optimistic optimization of a deterministic function without the knowledge of its smoothness. In Advances in Neural Information Processing Systems 24 (Shawe-Taylor, J. et al., eds), Curran Associates, pp. 783-791.
[330] Nash, S. G. (2000), ‘A multigrid approach to discretized optimization problems’, Optim. Methods Software 14, 99-116. · Zbl 0988.90040
[331] Nazareth, L. and Tseng, P. (2002), ‘Gilding the lily: A variant of the Nelder-Mead algorithm based on golden-section search’, Comput. Optim. Appl. 22, 133-144. · Zbl 1007.90062
[332] Nelder, J. A. and Mead, R. (1965), ‘A simplex method for function minimization’, Comput. J. 7, 308-313. · Zbl 0229.65053
[333] Nesterov, Y. (2004), Introductory Lectures on Convex Optimization: A Basic Course, Springer. · Zbl 1086.90045
[334] Nesterov, Y. and Spokoiny, V. (2017), ‘Random gradient-free minimization of convex functions’, Found. Comput. Math. 17, 527-566. · Zbl 1380.90220
[335] Neumaier, A. (2004), Complete search in continuous global optimization and constraint satisfaction. In Acta Numerica, Vol. 13, Cambridge University Press, pp. 271-369. · Zbl 1113.90124
[336] Neumaier, A., Fendl, H., Schilly, H. and Leitner, T. (2011), ‘VXQR: Derivative-free unconstrained optimization based on QR factorizations’, Soft Comput. 15, 2287-2298.
[337] Newby, E. and Ali, M. M. (2015), ‘A trust-region-based derivative free algorithm for mixed integer programming’, Comput. Optim. Appl. 60, 199-229. · Zbl 1308.90112
[338] Oeuvray, R. (2005), Trust-region methods based on radial basis functions with application to biomedical imaging. PhD thesis, EPFL.
[339] Oeuvray, R. and Bierlaire, M. (2009), ‘BOOSTERS: A derivative-free algorithm based on radial basis functions’, Internat. J. Model. Simul.29, 26-36.
[340] Olsson, P.-M. (2014), Methods for network optimization and parallel derivative-free optimization. PhD thesis, Linköping University.
[341] Omojokun, E. O. (1989), Trust region algorithms for optimization with nonlinear equality and inequality constraints. PhD thesis, University of Colorado at Boulder.
[342] Paquette, C. and Scheinberg, K. (2018), A stochastic line search method with convergence rate analysis. arXiv:1807.07994
[343] Pasupathy, R. (2010), ‘On choosing parameters in retrospective-approximation algorithms for stochastic root finding and simulation optimization’, Oper. Res. 58, 889-901. · Zbl 1228.90069
[344] Pasupathy, R., Glynn, P., Ghosh, S. and Hashemi, F. S. (2018), ‘On sampling rates in simulation-based recursions’, SIAM J. Optim. 28, 45-73. · Zbl 1380.93168
[345] Patel, N. R., Smith, R. L. and Zabinsky, Z. B. (1989), ‘Pure adaptive search in Monte Carlo optimization’, Math. Program. 43, 317-328. · Zbl 0671.90047
[346] Peckham, G. (1970), ‘A new method for minimising a sum of squares without calculating gradients’, Comput. J. 13, 418-420. · Zbl 0227.65037
[347] Picheny, V., Gramacy, R. B., Wild, S. M. and Le Digabel, S. (2016), Bayesian optimization under mixed constraints with a slack-variable augmented Lagrangian. In Advances in Neural Information Processing Systems 29 (Lee, D. D. et al., eds), Curran Associates, pp. 1435-1443.
[348] Plantenga, T. D. (2009), HOPSPACK 2.0 user manual. Technical report SAND2009-6265, Sandia National Laboratories.
[349] Polak, E. (1971), Computational Methods in Optimization: A Unified Approach, Academic Press.
[350] Polak, E. and Wetter, M. (2006), ‘Precision control for generalized pattern search algorithms with adaptive precision function evaluations’, SIAM J. Optim. 16, 650-669. · Zbl 1113.90148
[351] Polyak, B. T. (1987), Introduction to Optimization, Optimization Software.
[352] Porcelli, M. and Toint, P. L. (2017), ‘BFO: A trainable derivative-free brute force optimizer for nonlinear bound-constrained optimization and equilibrium computations with continuous and discrete variables’, ACM Trans. Math. Software 44, 1-25. · Zbl 1484.65136
[353] Pourmohamad, T. (2016), Combining multivariate stochastic process models with filter methods for constrained optimization. PhD thesis, UC Santa Cruz.
[354] Powell, M. J. D. (1964), ‘An efficient method for finding the minimum of a function of several variables without calculating derivatives’, Comput. J. 7, 155-162. · Zbl 0132.11702
[355] Powell, M. J. D. (1965), ‘A method for minimizing a sum of squares of non-linear functions without calculating derivatives’, Comput. J. 7, 303-307. · Zbl 0142.11601
[356] Powell, M. J. D. (1975), ‘A view of unconstrained minimization algorithms that do not require derivatives’, ACM Trans. Math. Software 1, 97-107. · Zbl 0303.65059
[357] Powell, M. J. D. (1994), A direct search optimization method that models the objective and constraint functions by linear interpolation. In Advances in Optimization and Numerical Analysis (Gomez, S. and Hennart, J. P., eds), Springer, pp. 51-67. · Zbl 0826.90108
[358] Powell, M. J. D. (1997), Trust region calculations revisited. In Numerical Analysis 1997 (Griffiths, D. F. et al., eds), Addison Wesley Longman, pp. 193-211. · Zbl 0903.65052
[359] Powell, M. J. D. (1998a), Direct search algorithms for optimization calculations. In Acta Numerica, Vol. 7, Cambridge University Press, pp. 287-336. · Zbl 0911.65050
[360] Powell, M. J. D. (1998b), The use of band matrices for second derivative approximations in trust region algorithms. In Advances in Nonlinear Programming (Yuan, Y., ed.), Springer, pp. 3-28. · Zbl 0908.90241
[361] Powell, M. J. D. (2001), ‘On the Lagrange functions of quadratic models that are defined by interpolation’, Optim. Methods Software 16, 289-309. · Zbl 1027.90091
[362] Powell, M. J. D. (2002), ‘UOBYQA: Unconstrained optimization by quadratic approximation’, Math. Program. 92, 555-582. · Zbl 1014.65050
[363] Powell, M. J. D. (2003), ‘On trust region methods for unconstrained minimization without derivatives’, Math. Program. 97, 605-623. · Zbl 1106.90382
[364] Powell, M. J. D. (2004a), ‘Least Frobenius norm updating of quadratic models that satisfy interpolation conditions’, Math. Program. 100, 183-215. · Zbl 1146.90526
[365] Powell, M. J. D. (2004b), ‘On the use of quadratic models in unconstrained minimization without derivatives’, Optim. Methods Software 19, 399-411. · Zbl 1140.90513
[366] Powell, M. J. D. (2004c), On updating the inverse of a KKT matrix. Technical report DAMTP 2004/NA01, University of Cambridge.
[367] Powell, M. J. D. (2006), The NEWUOA software for unconstrained optimization without derivatives. In Large-Scale Nonlinear Optimization (Di Pillo, G. and Roma, M., eds), Springer, pp. 255-297. · Zbl 1108.90005
[368] Powell, M. J. D. (2007), A view of algorithms for optimization without derivatives. Technical report DAMTP 2007/NA03, University of Cambridge.
[369] Powell, M. J. D. (2008), ‘Developments of NEWUOA for minimization without derivatives’, IMA J. Numer. Anal. 28, 649-664. · Zbl 1154.65049
[370] Powell, M. J. D. (2009), The BOBYQA algorithm for bound constrained optimization without derivatives. Technical report DAMTP 2009/NA06, University of Cambridge.
[371] Powell, M. J. D. (2010), ‘On the convergence of a wide range of trust region methods for unconstrained optimization’, IMA J. Numer. Anal. 30, 289-301. · Zbl 1187.65069
[372] Powell, M. J. D. (2012), ‘On the convergence of trust region algorithms for unconstrained minimization without derivatives’, Comput. Optim. Appl. 53, 527-555. · Zbl 1284.90096
[373] Powell, M. J. D. (2013), ‘Beyond symmetric Broyden for updating quadratic models in minimization without derivatives’, Math. Program. 138, 475-500. · Zbl 1266.65095
[374] Powell, M. J. D. (2015), ‘On fast trust region methods for quadratic models with linear constraints’, Math. Program. Comput. 7, 237-267. · Zbl 1325.65084
[375] Press, W. H., Teukolsky, S. A., Vetterling, W. T. and Flannery, B. P. (2007), Numerical Recipes in Fortran: The Art of Scientific Computing, third edition, Cambridge University Press. · Zbl 1132.65001
[376] Price, C. J. and Toint, P. L. (2006), ‘Exploiting problem structure in pattern search methods for unconstrained optimization’, Optim. Methods Software 21, 479-491. · Zbl 1136.90516
[377] Price, C. J., Coope, I. D. and Byatt, D. (2002), ‘A convergent variant of the Nelder-Mead algorithm’, J. Optim. Theory Appl. 113, 5-19. · Zbl 1172.90508
[378] Ralston, M. L. and Jennrich, R. I. (1978), ‘Dud: A derivative-free algorithm for nonlinear least squares’, Technometrics 20, 7-14. · Zbl 0422.65006
[379] Rashid, K., Ambani, S. and Cetinkaya, E. (2012), ‘An adaptive multiquadric radial basis function method for expensive black-box mixed-integer nonlinear constrained optimization’, Engng Optim. 45, 185-206.
[380] Rastrigin, L. A. (1963), ‘The convergence of the random search method in the extremal control of many-parameter system’, Automat. Remote Control 24, 1337-1342.
[381] Reddi, S. J., Hefny, A., Sra, S., Poczos, B. and Smola, A. (2016), Stochastic variance reduction for nonconvex optimization. In 33rd International Conference on Machine Learning (Balcan, M. F. and Weinberger, K. Q., eds). Proc. Mach. Learn. Res. 48, 314-323.
[382] Regier, J., Jordan, M. I. and Mcauliffe, J. (2017), Fast black-box variational inference through stochastic trust-region optimization. In Advances in Neural Information Processing Systems 30 (Guyon, I. et al., eds), Curran Associates, pp. 2402-2411.
[383] Regis, R. G. (2013), ‘Constrained optimization by radial basis function interpolation for high-dimensional expensive black-box problems with infeasible initial points’, Engng Optim. 46, 218-243.
[384] Regis, R. G. (2015), ‘The calculus of simplex gradients’, Optim. Lett. 9, 845-865. · Zbl 1351.90168
[385] Regis, R. G. (2016), ‘On the properties of positive spanning sets and positive bases’, Optim. Engng 17, 229-262. · Zbl 1364.90364
[386] Regis, R. G. and Shoemaker, C. A. (2007), ‘A stochastic radial basis function method for the global optimization of expensive functions’, INFORMS J. Comput. 19, 457-509. · Zbl 1241.90192
[387] Regis, R. G. and Wild, S. M. (2017), ‘CONORBIT: Constrained optimization by radial basis function interpolation in trust regions’, Optim. Methods Software 32, 552-580. · Zbl 1367.65091
[388] Rinnooy Kan, A. H. G. and Timmer, G. T. (1987a), ‘Stochastic global optimization methods, I: Clustering methods’, Math. Program. 39, 27-56. · Zbl 0634.90066
[389] Rinnooy Kan, A. H. G. and Timmer, G. T. (1987b), ‘Stochastic global optimization methods, II: Multi level methods’, Math. Program. 39, 57-78. · Zbl 0634.90067
[390] Rios, L. M. and Sahinidis, N. V. (2013), ‘Derivative-free optimization: A review of algorithms and comparison of software implementations’, J. Global Optim. 56, 1247-1293. · Zbl 1272.90116
[391] Robbins, H. (1952), ‘Some aspects of the sequential design of experiments’, Bull. Amer. Math. Soc. 58, 527-536. · Zbl 0049.37009
[392] Robbins, H. and Monro, S. (1951), ‘A stochastic approximation method’, Ann. Math. Statist. 22, 400-407. · Zbl 0054.05901
[393] Rockafellar, R. and Wets, R. J.-B. (2009), Variational Analysis, Springer. · Zbl 0888.49001
[394] Rosenbrock, H. H. (1960), ‘An automatic method for finding the greatest or least value of a function’, Comput. J. 3, 175-184.
[395] Ruppert, D. (1991), Stochastic approximation. In Handbook of Sequential Analysis (Ghosh, B. K. and Sen, P. K., eds), CRC Press, pp. 503-529.
[396] Russo, D. and Van Roy, B. (2016), ‘An information-theoretic analysis of Thompson sampling’, J. Mach. Learn. Res. 17, 2442-2471. · Zbl 1360.62030
[397] Rykov, A. S. (1980), ‘Simplex direct search algorithms’, Automat. Remote Control 41, 784-793. · Zbl 0457.90066
[398] Ryu, J. H. and Kim, S. (2014), ‘A derivative-free trust-region method for biobjective optimization’, SIAM J. Optim. 24, 334-362. · Zbl 1291.90230
[399] Sacks, J. (1958), ‘Asymptotic distribution of stochastic approximation procedures’, Ann. Math. Statist. 29, 373-405. · Zbl 0229.62010
[400] Saha, A. and Tewari, A. (2011), Improved regret guarantees for online smooth convex optimization with bandit feedback. In 14th International Conference on Artificial Intelligence and Statistics (AISTATS). Proc. Mach. Learn. Res. 15, 636-642.
[401] Sampaio, P. R. and Toint, P. L. (2015), ‘A derivative-free trust-funnel method for equality-constrained nonlinear optimization’, Comput. Optim. Appl. 61, 25-49. · Zbl 1311.90187
[402] Sampaio, P. R. and Toint, P. L. (2016), ‘Numerical experience with a derivative-free trust-funnel method for nonlinear optimization problems with general nonlinear constraints’, Optim. Methods Software 31, 511-534. · Zbl 1369.90138
[403] Sankaran, S., Audet, C. and Marsden, A. L. (2010), ‘A method for stochastic constrained optimization using derivative-free surrogate pattern search and collocation’, J. Comput. Phys. 229, 4664-4682. · Zbl 1193.65100
[404] Scheinberg, K. and Toint, P. L. (2010), ‘Self-correcting geometry in model-based algorithms for derivative-free unconstrained optimization’, SIAM J. Optim. 20, 3512-3532. · Zbl 1209.65017
[405] Shahriari, B., Swersky, K., Wang, Z., Adams, R. P. and De Freitas, N. (2016), ‘Taking the human out of the loop: A review of Bayesian optimization’, Proc. IEEE 104, 148-175.
[406] Shamir, O. (2013), On the complexity of bandit and derivative-free stochastic convex optimization. In 26th Annual Conference on Learning Theory (COLT 2013) (Shalev-Shwartz, S. and Steinwart, I., eds). Proc. Mach. Learn. Res. 30, 3-24.
[407] Shamir, O. (2017), ‘An optimal algorithm for bandit and zero-order convex optimization with two-point feedback’, J. Mach. Learn. Res. 18(52), 1-11. · Zbl 1440.90049
[408] Shashaani, S., Hashemi, F. S. and Pasupathy, R. (2018), ‘ASTRO-DF: A class of adaptive sampling trust-region algorithms for derivative-free stochastic optimization’, SIAM J. Optim. 28, 3145-3176. · Zbl 1403.90541
[409] Shashaani, S., Hunter, S. R. and Pasupathy, R. (2016), ASTRO-DF: Adaptive sampling trust-region optimization algorithms, heuristics, and numerical experience. In 2016 Winter Simulation Conference (WSC), IEEE.
[410] Shoemaker, C. A. and Regis, R. G. (2003), MAPO: Using a committee of algorithm-experts for parallel optimization of costly functions. In Proceedings of the ACM Symposium on Parallel Algorithms and Architectures, ACM, pp. 242-243.
[411] Spall, J. C. (1992), ‘Multivariate stochastic approximation using a simultaneous perturbation gradient approximation’, IEEE Trans. Automat. Control 37, 332-341. · Zbl 0745.60110
[412] Spall, J. C. (2005), Introduction to Stochastic Search and Optimization: Estimation, Simulation, and Control, Wiley.
[413] Spendley, W. (1969), Nonlinear least squares fitting using a modified simplex minimization method. In Optimization (Fletcher, R., ed.), Academic Press, pp. 259-270. · Zbl 0194.47502
[414] Spendley, W., Hext, G. R. and Himsworth, F. R. (1962), ‘Sequential application of simplex designs in optimisation and evolutionary operation’, Technometrics 4, 441-461. · Zbl 0121.35603
[415] Srebro, N., Sridharan, K. and Tewari, A. (2011), On the universality of online mirror descent. In Advances in Neural Information Processing Systems 24 (Shawe-Taylor, J. et al., eds), Curran Associates, pp. 2645-2653.
[416] Sriver, T. A., Chrissis, J. W. and Abramson, M. A. (2009), ‘Pattern search ranking and selection algorithms for mixed variable simulation-based optimization’, European J. Oper. Res. 198, 878-890. · Zbl 1176.90269
[417] Stich, S. U., Müller, C. L. and Gärtner, B. (2013), ‘Optimization of convex functions with random pursuit’, SIAM J. Optim. 23, 1284-1309.
[418] Taddy, M., Lee, H. K. H., Gray, G. A. and Griffin, J. D. (2009), ‘Bayesian guided pattern search for robust local optimization’, Technometrics 51, 389-401.
[419] Torczon, V. (1991), ‘On the convergence of the multidirectional search algorithm’, SIAM J. Optim. 1, 123-145. · Zbl 0752.90076
[420] Torczon, V. (1997), ‘On the convergence of pattern search algorithms’, SIAM J. Optim. 7, 1-25. · Zbl 0884.65053
[421] Törn, A. and Žilinskas, A. (1989), Global Optimization, Springer.
[422] Tröltzsch, A. (2016), ‘A sequential quadratic programming algorithm for equality-constrained optimization without derivatives’, Optim. Lett. 10, 383-399. · Zbl 1343.90093
[423] Tseng, P. (1999), ‘Fortified-descent simplicial search method: A general approach’, SIAM J. Optim. 10, 269-288. · Zbl 1030.90122
[424] Valko, M., Carpentier, A. and Munos, R. (2013), Stochastic simultaneous optimistic optimization. In 30th International Conference on Machine Learning (ICML 13). Proc. Mach. Learn. Res. 28, 19-27.
[425] Van Dyke, B. and Asaki, T. J. (2013), ‘Using QR decomposition to obtain a new instance of mesh adaptive direct search with uniformly distributed polling directions’, J. Optim. Theory Appl. 159, 805-821. · Zbl 1284.90081
[426] Vanden Berghen, F. (2004), CONDOR: A constrained, non-linear, derivative-free parallel optimizer for continuous, high computing load, noisy objective functions. PhD thesis, Université Libre de Bruxelles.
[427] Vanden Berghen, F. and Bersini, H. (2005), ‘CONDOR: A new parallel, constrained extension of Powell’s UOBYQA algorithm: Experimental results and comparison with the DFO algorithm’, J. Comput. Appl. Math. 181, 157-175. · Zbl 1072.65088
[428] Verdério, A., Karas, E. W., Pedroso, L. G. and Scheinberg, K. (2017), ‘On the construction of quadratic models for derivative-free trust-region algorithms’, EURO J. Comput. Optim. 5, 501-527. · Zbl 1386.90181
[429] Vicente, L. N. (2013), ‘Worst case complexity of direct search’, EURO J. Comput. Optim. 1, 143-153. · Zbl 1304.90198
[430] Vicente, L. N. and Custódio, A. (2012), ‘Analysis of direct searches for discontinuous functions’, Math. Program. 133, 299-325.
[431] Vu, K. K., D’Ambrosio, C., Hamadi, Y. and Liberti, L. (2016), ‘Surrogate-based methods for black-box optimization’, Internat. Trans. Oper. Res. 24, 393-424. · Zbl 1366.90196
[432] Wang, Y., Du, S. S., Balakrishnan, S. and Singh, A. (2018), Stochastic zeroth-order optimization in high dimensions. In 21st International Conference on Artificial Intelligence and Statistics (Storkey, A. and Perez-Cruz, F., eds). Proc. Mach. Learn. Res. 84, 1356-1365.
[433] Wendland, H. (2005), Scattered Data Approximation, Cambridge University Press. · Zbl 1075.65021
[434] Wild, S. M. (2008a), Derivative-free optimization algorithms for computationally expensive functions. PhD thesis, Cornell University.
[435] Wild, S. M. (2008b), MNH: A derivative-free optimization algorithm using minimal norm Hessians. In Tenth Copper Mountain Conference on Iterative Methods. http://grandmaster.colorado.edu/~copper/2008/SCWinners/Wild.pdf
[436] Wild, S. M. (2017), Solving derivative-free nonlinear least squares problems with POUNDERS. In Advances and Trends in Optimization with Engineering Applications (Terlaky, T. et al., eds), SIAM, pp. 529-540.
[437] Wild, S. M. and Shoemaker, C. A. (2011), ‘Global convergence of radial basis function trust region derivative-free algorithms’, SIAM J. Optim. 21, 761-781. · Zbl 1397.65024
[438] Wild, S. M. and Shoemaker, C. A. (2013), ‘Global convergence of radial basis function trust-region algorithms for derivative-free optimization’, SIAM Review 55, 349-371. · Zbl 1270.65028
[439] Wild, S. M., Regis, R. G. and Shoemaker, C. A. (2008), ‘ORBIT: Optimization by radial basis function interpolation in trust-regions’, SIAM J. Sci. Comput. 30, 3197-3219. · Zbl 1178.65065
[440] Winfield, D. H. (1969), Function and functional optimization by interpolation in data tables. PhD thesis, Harvard University.
[441] Winfield, D. H. (1973), ‘Function minimization by interpolation in a data table’, J. Inst. Math. Appl. 12, 339-347. · Zbl 0274.90060
[442] Woods, D. J. (1985), An interactive approach for solving multi-objective optimization problems. PhD thesis, Rice University.
[443] Wright, M. H. (1995), Direct search methods: Once scorned, now respectable. In Numerical Analysis 1995 (Griffiths, D. F. and Watson, G. A., eds), Addison Wesley Longman, pp. 191-208. · Zbl 0844.65057
[444] Xiong, S., Qian, P. Z. G. and Wu, C. F. J. (2013), ‘Sequential design and analysis of high-accuracy and low-accuracy computer codes’, Technometrics 55, 37-46.
[445] Xu, J. and Zikatanov, L. (2017), Algebraic multigrid methods. In Acta Numerica, Vol. 26, Cambridge University Press, pp. 591-721. · Zbl 1378.65182
[446] Yu, W.-C. (1979), ‘Positive basis and a class of direct search techniques’, Scientia Sinica, Special Issue of Mathematics 1, 53-67.
[447] Yuan, Y.-X. (1985), ‘Conditions for convergence of trust region algorithms for nonsmooth optimization’, Math. Program. 31, 220-228. · Zbl 0576.90080
[448] Zabinsky, Z. B. and Smith, R. L. (1992), ‘Pure adaptive search in global optimization’, Math. Program. 53, 323-338. · Zbl 0756.90086
[449] Zhang, D. and Lin, G.-H. (2014), ‘Bilevel direct search method for leader-follower problems and application in health insurance’, Comput. Oper. Res. 41, 359-373. · Zbl 1348.91083
[450] Zhang, H. and Conn, A. R. (2012), ‘On the local convergence of a derivative-free algorithm for least-squares minimization’, Comput. Optim. Appl. 51, 481-507. · Zbl 1268.90043
[451] Zhang, H., Conn, A. R. and Scheinberg, K. (2010), ‘A derivative-free algorithm for least-squares minimization’, SIAM J. Optim. 20, 3555-3576. · Zbl 1213.65091
[452] Zhang, L., Yang, T., Jin, R. and Zhou, Z.-H. (2015), Online bandit learning for a special class of non-convex losses. In 29th AAAI Conference on Artificial Intelligence (AAAI ’15), AAAI Press, pp. 3158-3164.
[453] Zhang, Z. (2014), ‘Sobolev seminorm of quadratic functions with applications to derivative-free optimization’, Math. Program. 146, 77-96. · Zbl 1315.90063
[454] Zhigljavsky, A. A. (1991), Theory of Global Random Search, Springer.
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.