On best data approximation. (English) Zbl 0596.41039

Summary: Suppose we are given a set of data \(\{a_{jk}\}\) which are the values of a function \(f\) and its derivatives at a set of sample points \(\{x_1,\dots,x_n\}\), say \[ a_{jk}=f^{(j)}(x_k),\quad k=1,\dots,n. \] If we wish to approximate these data using a function from an \(N\)-dimensional space \(S_N\), what would be the "best" algorithm? Let us first assume that all bits of information are of equal importance. If it happens that \(N=n\), then Lagrange interpolation is probably the most natural procedure to use and can in fact be proved to be "optimal", both in the sense of the "optimal recovery scheme" and in the scheme described in this paper. If, in addition, the functions under consideration are sufficiently smooth and \(N\) is a multiple of \(n\), then Hermite interpolation should prove to be the correct answer. The interesting problem arises when \(n\) does not divide \(N\) or when some bits of data are more important than others. This paper attempts to provide one way of arriving at a "natural" answer to this question.
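The \(N=n\) case mentioned above can be illustrated with a minimal sketch of Lagrange interpolation (hypothetical code, not from the paper): given \(n\) function values \(a_k=f(x_k)\), it constructs the unique polynomial of degree less than \(n\) matching all samples.

```python
# Minimal sketch (illustrative, not from the paper): Lagrange interpolation
# for the case N = n, where the data are plain function values a_k = f(x_k).

def lagrange_interpolant(xs, ys):
    """Return p(x) = sum_k y_k * l_k(x), with l_k the Lagrange basis polynomials."""
    def p(x):
        total = 0.0
        for k, (xk, yk) in enumerate(zip(xs, ys)):
            # l_k(x) = prod_{j != k} (x - x_j) / (x_k - x_j); l_k(x_k) = 1,
            # l_k(x_j) = 0 for j != k, so p reproduces every sample exactly.
            basis = 1.0
            for j, xj in enumerate(xs):
                if j != k:
                    basis *= (x - xj) / (xk - xj)
            total += yk * basis
        return total
    return p

# Example: three samples of f(x) = x^2 determine the quadratic exactly.
xs = [0.0, 1.0, 2.0]
ys = [x * x for x in xs]
p = lagrange_interpolant(xs, ys)
print(p(1.5))  # 2.25, since the interpolant reproduces any quadratic
```

Hermite interpolation, used when derivative values \(f^{(j)}(x_k)\) are also prescribed, generalizes this by matching derivatives as well as values at each sample point.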


41A50 Best approximation, Chebyshev systems
41A65 Abstract approximation theory (approximation in normed linear spaces and other abstract spaces)