Jones, L. K. Local greedy approximation for nonlinear regression and neural network training. (English) Zbl 1105.62354
Ann. Stat. 28, No. 5, 1379-1389 (2000).

Summary: A criterion for local estimation and approximation in nonlinear regression and neural network training is introduced and motivated. \(N\)th-order greedy approximation of the regression (or target) function based on this criterion is shown to converge at rate \(O(1/N^{1/2})\) in the nonsampling case.

MSC:
62J02 General nonlinear regression
62H12 Estimation in multivariate analysis
62M45 Neural nets and related approaches to inference from stochastic processes
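For context, the \(O(1/N^{1/2})\) rate is of the same form as the classical greedy-approximation bounds in Hilbert space. A minimal sketch of that classical relaxed greedy scheme (the setting of Jones's 1992 lemma and Barron's 1993 bound, given here only to illustrate the rate, not as this paper's local criterion): given a dictionary \(G \subset H\) with \(\|g\| \le b\) for all \(g \in G\) and a target \(f\) in the closure of the convex hull of \(G\), set \(f_0 = 0\) and iterate
\[
f_N = (1 - \alpha_N)\, f_{N-1} + \alpha_N\, g_N, \qquad (\alpha_N, g_N) \approx \operatorname*{arg\,min}_{\alpha \in [0,1],\, g \in G} \bigl\| f - (1 - \alpha) f_{N-1} - \alpha g \bigr\|,
\]
where approximate minimization at each step suffices; this yields \(\|f - f_N\| = O(1/N^{1/2})\) with a constant depending only on \(b\) and \(\|f\|\).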