
Limiting behavior of derivative approximation techniques as the number of points tends to infinity on a fixed interval in \(\mathbb{R}\). (English) Zbl 07305141
Summary: We consider the question of numerically approximating the derivative of a smooth function using only function evaluations. In particular, we examine the regression gradient, the generalized simplex gradient and the generalized centered simplex gradient, three numerical techniques that use function values at a collection of sample points to construct ‘best-fit’ linear models. Under certain conditions, these gradient approximations admit error bounds that depend on the number of sample points, the Lipschitz constant of the true gradient, and the geometry of the sample set. Perhaps counterintuitively, as the number of sample points increases (to infinity) on a fixed domain, the error bounds can increase (to infinity). In this work, we first explore the behavior of the error bound for generalized simplex gradients of a single-variable function \((f:\mathbb{R}\to\mathbb{R})\). Thereafter, we investigate the behavior of the absolute error for these three gradient approximation techniques as the number of sample points tends to infinity. Under reasonable assumptions, we prove that the absolute errors remain bounded as the number of sample points increases to infinity on a fixed interval.
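To make the least-squares construction behind these techniques concrete, the following is a minimal Python/NumPy sketch (illustrative code, not taken from the paper; the function names, the step sizes, and the test function \(\sin x\) are assumptions). It computes the generalized simplex gradient and its centered variant as minimum-norm least-squares solutions of \(S^{\top} g \approx \delta f\), where the columns of \(S\) are the sample directions and \(\delta f\) collects the corresponding function-value differences, then prints the absolute error as the number of sample points grows on a fixed interval.

```python
import numpy as np

def generalized_simplex_gradient(f, x0, S):
    # S is an n-by-k matrix whose columns are the directions x_i - x0.
    # The estimate is the minimum-norm least-squares solution g of
    # S^T g = delta_f, i.e. g = (S^T)^+ delta_f with the Moore-Penrose
    # pseudoinverse (S^T)^+.
    f0 = f(x0)
    delta_f = np.array([f(x0 + S[:, i]) - f0 for i in range(S.shape[1])])
    g, *_ = np.linalg.lstsq(S.T, delta_f, rcond=None)
    return g

def generalized_centered_simplex_gradient(f, x0, S):
    # Same least-squares system, but with centered differences
    # (f(x0 + s_i) - f(x0 - s_i)) / 2 on the right-hand side, which
    # cancels the second-order term of the Taylor expansion.
    delta_f = np.array([(f(x0 + S[:, i]) - f(x0 - S[:, i])) / 2.0
                        for i in range(S.shape[1])])
    g, *_ = np.linalg.lstsq(S.T, delta_f, rcond=None)
    return g

if __name__ == "__main__":
    # Single-variable test: k sample offsets packed into the fixed
    # interval (0, 0.1], with k increasing.
    f = lambda x: np.sin(x[0])
    x0 = np.array([0.5])
    true_grad = np.cos(0.5)
    for k in (2, 8, 32, 128):
        S = np.linspace(0.1 / k, 0.1, k).reshape(1, k)
        g = generalized_simplex_gradient(f, x0, S)
        gc = generalized_centered_simplex_gradient(f, x0, S)
        print(f"k={k:4d}  |error|={abs(g[0] - true_grad):.2e}  "
              f"|centered error|={abs(gc[0] - true_grad):.2e}")
```

On a smooth test function such as this one, the printed absolute errors stay bounded (indeed small) as the number of sample points grows, in line with the boundedness result described above. The regression gradient differs from the sketch in that it also fits the constant term of the affine model by least squares rather than interpolating the function value at the base point.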
MSC:
65D25 Numerical differentiation
90C56 Derivative-free methods and methods using generalized derivatives
Software:
KELLEY; NEWUOA; ORBIT; UOBYQA
References:
[1] Nocedal, J.; Wright, S. J., Numerical Optimization, Springer Series in Operations Research (1999), Springer-Verlag, New York
[2] Conn, A. R.; Scheinberg, K.; Vicente, L. N., Introduction to Derivative-Free Optimization, MPS/SIAM Book Series on Optimization, vol. 8 (2009), SIAM · Zbl 1163.49001
[3] Audet, C.; Hare, W., Derivative-Free and Blackbox Optimization (2017), Springer International Publishing AG, Switzerland · Zbl 1391.90001
[4] Kelley, C. T., Detection and remediation of stagnation in the Nelder-Mead algorithm using a sufficient decrease condition, SIAM J. Optim., 10, 1, 43-55 (1999) · Zbl 0962.65048
[5] Conn, A. R.; Scheinberg, K.; Vicente, L. N., Geometry of interpolation sets in derivative free optimization, Math. Program., 111, 1-2, Ser. B, 141-172 (2008) · Zbl 1163.90022
[6] Conn, A. R.; Scheinberg, K.; Vicente, L. N., Geometry of sample sets in derivative-free optimization: polynomial regression and underdetermined interpolation, IMA J. Numer. Anal., 28, 4, 721-748 (2008) · Zbl 1157.65034
[7] Kelley, C. T., Iterative Methods for Optimization, Frontiers in Applied Mathematics, vol. 18 (1999), Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA · Zbl 0934.90082
[8] Conn, A. R.; Toint, Ph. L., An algorithm using quadratic interpolation for unconstrained derivative free optimization, in: Nonlinear Optimization and Applications (1996), Springer US, 27-47 · Zbl 0976.90102
[9] Powell, M. J.D., UOBYQA: unconstrained optimization by quadratic approximation, Math. Program., 92, 3, Ser. B, 555-582 (2002), ISMP 2000, Part 2 (Atlanta, GA) · Zbl 1014.65050
[10] Powell, M. J.D., Developments of NEWUOA for minimization without derivatives, IMA J. Numer. Anal., 28, 4, 649-664 (2008) · Zbl 1154.65049
[11] Custódio, A. L.; Dennis, J. E.; Vicente, L. N., Using simplex gradients of nonsmooth functions in direct search methods, IMA J. Numer. Anal., 28, 4, 770-784 (2008) · Zbl 1156.65059
[12] Regis, R. G., The calculus of simplex gradients, Optim. Lett., 9, 5, 845-865 (2015) · Zbl 1351.90168
[13] Schonlau, M.; Jones, D. R.; Welch, W. J., Global versus local search in constrained optimization of computer models, in: New Developments and Applications in Experimental Design, IMS Lecture Notes - Monograph Series, no. 34 (1998), Institute of Mathematical Statistics, 11-25
[14] Regis, R. G.; Shoemaker, C. A., Parallel radial basis function methods for the global optimization of expensive functions, European J. Oper. Res., 182, 2, 514-535 (2007) · Zbl 1178.90279
[15] Wild, S. M.; Regis, R. G.; Shoemaker, C. A., ORBIT: optimization by radial basis function interpolation in trust-regions, SIAM J. Sci. Comput., 30, 6, 3197-3219 (2008) · Zbl 1178.65065
[16] Wild, S. M.; Shoemaker, C., Global convergence of radial basis function trust region derivative-free algorithms, SIAM J. Optim., 21, 3, 761-781 (2011) · Zbl 1397.65024
[17] Billups, S. C.; Larson, J.; Graf, P., Derivative-free optimization of expensive functions with computational error using weighted regression, SIAM J. Optim., 23, 1, 27-53 (2013) · Zbl 1295.90106
[18] Audet, C.; Hare, W., Model-based methods in derivative-free nonsmooth optimization, in: Bagirov, A.; Gaudioso, M.; Karmitsa, N.; Mäkelä, M.; Taheri, S. (eds.), Numerical Nonsmooth Optimization (2020), Springer, Cham
[19] Horn, R. A.; Johnson, C. R., Matrix Analysis (2013), Cambridge University Press, Cambridge · Zbl 1267.15001
[20] Burden, R. L.; Faires, J. D.; Burden, A. M., Numerical Analysis (2016), Brooks/Cole, Cengage Learning
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.