Estimating the approximation error in learning theory.

*(English)* Zbl 1079.68089
Summary: Let $B$ be a Banach space and $(\mathcal H, \Vert \cdot \Vert_{\mathcal H})$ a dense, embedded subspace. For $a \in B$, its distance $I(a, R)$ to the ball of $\mathcal H$ with radius $R$ tends to zero as $R$ tends to infinity. We are interested in the rate of this convergence. This approximation problem arose in the study of learning theory, where $B$ is an $L_2$ space and $\mathcal H$ is a reproducing kernel Hilbert space.
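In symbols, the quantity under study can be written as follows (the infimum formulation is our paraphrase of the distance described above, using the notation of the summary):

```latex
% Distance from a \in B to the ball of radius R in \mathcal H:
I(a, R) \;=\; \inf_{\substack{b \in \mathcal H \\ \Vert b \Vert_{\mathcal H} \le R}} \Vert a - b \Vert_{B},
\qquad
I(a, R) \longrightarrow 0 \quad \text{as } R \to \infty,
% the convergence holding because \mathcal H is dense in B.
```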
The class of elements with $I(a, R) = O(R^{-r})$ for some $r > 0$ is an interpolation space of the pair $(B, \mathcal H)$. The rate of convergence can often be realized by linear operators; in particular, this is the case when $\mathcal H$ is the range of a compact, symmetric, strictly positive definite linear operator on a separable Hilbert space $B$. For the kernel approximation studied in learning theory, the rate depends on the regularity of the kernel function, which yields error estimates for approximation by reproducing kernel Hilbert spaces. When the kernel is very smooth the convergence is slow, and a logarithmic convergence rate is presented for analytic kernels in this paper. The purpose of our results is to provide theoretical estimates, including the constants, for the approximation error required in learning theory.
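The connection between the decay rate of $I(a, R)$ and interpolation spaces can be sketched via the Peetre K-functional; the exponent computation below is a standard heuristic consistent with the stated rate, not a verbatim statement from the paper:

```latex
% Peetre K-functional of the pair (B, \mathcal H):
K(a, t) \;=\; \inf_{b \in \mathcal H}
  \bigl( \Vert a - b \Vert_{B} + t \Vert b \Vert_{\mathcal H} \bigr),
\qquad t > 0.
% If I(a, R) \le C R^{-r}, then taking a near-minimizer b with
% \Vert b \Vert_{\mathcal H} \le R gives
%   K(a, t) \le C R^{-r} + t R;
% optimizing over R (choose R \sim t^{-1/(r+1)}) yields
%   K(a, t) = O\bigl( t^{\, r/(r+1)} \bigr),
% placing a in an interpolation space of (B, \mathcal H)
% with parameter \theta = r/(1 + r).
```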

##### MSC:

| Code | Classification |
| --- | --- |
| 68T05 | Learning and adaptive systems |
| 41A25 | Rate of convergence, degree of approximation |