Recent zbMATH articles in MSC 41A25
https://zbmath.org/atom/cc/41A25 (2021-06-15T18:09:00+00:00)

Accurate sampling formula for approximating the partial derivatives of bivariate analytic functions.
https://zbmath.org/1460.94030
Authors: Asharabi, Rashad M. (https://zbmath.org/authors/?q=ai:asharabi.rashad-m); Prestin, Jurgen (https://zbmath.org/authors/?q=ai:prestin.jurgen)
Summary: The bivariate sinc-Gauss sampling formula was introduced in \textit{R. M. Asharabi} and \textit{J. Prestin} [IMA J. Numer. Anal. 36, No. 2, 851--871 (2016; Zbl 1433.94047)] to approximate analytic functions of two variables which satisfy a certain growth condition. In this paper, we apply this formula to approximate partial derivatives of any order of entire and holomorphic functions on an infinite horizontal strip domain, using only finitely many samples of the function itself. A rigorous error analysis with sharp estimates is carried out via a complex-analytic approach. The convergence rate of this technique is of exponential type, and the method has high accuracy in comparison with the bivariate classical sampling formula. Several computational examples are exhibited, demonstrating the accuracy of the obtained results.

Compactly supported quasi-tight multiframelets with high balancing orders and compact framelet transforms.
https://zbmath.org/1460.42055
Authors: Han, Bin (https://zbmath.org/authors/?q=ai:han.bin.1|han.bin); Lu, Ran (https://zbmath.org/authors/?q=ai:lu.ran)
Summary: Framelets derived from refinable (vector) functions via the popular oblique extension principle (OEP) are of interest in both theory and applications. Although OEP can increase the vanishing moments of framelet generators to improve sparsity, it has a serious shortcoming for scalar framelets: the associated discrete framelet transform is often not compact, and deconvolution is unavoidable. On the other hand, in sharp contrast to the extensively studied scalar framelets, OEP-based multiframelets are far from well understood. In this paper, we prove that from any compactly supported refinable vector function with at least two entries, one can always construct through OEP a compactly supported quasi-tight multiframelet such that all framelet generators have the highest possible order of vanishing moments and the underlying discrete framelet transform is compact and balanced. The key ingredient of our proof is a newly developed normal form of matrix-valued filters, which greatly facilitates the study of multiframelets.

Strong converse result for weighted approximation by Baskakov-Kantorovich operator.
https://zbmath.org/1460.41013
Authors: Gadjev, Ivan (https://zbmath.org/authors/?q=ai:gadjev.ivan); Uluchev, Rumen (https://zbmath.org/authors/?q=ai:uluchev.rumen-k)
Summary: We study weighted approximation of functions in the \(L_p\)-norm by a Kantorovich modification of the classical Baskakov operator, where the weight has the form \((1 + x)^\alpha\), \(\alpha < 0\). Defining an appropriate \(K\)-functional, we prove a strong converse inequality of type B for the rate of approximation.
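To make the operator concrete, here is a minimal numerical sketch of the Baskakov-Kantorovich operator. The truncation level of the infinite series and the midpoint-rule cell averages are implementation choices for illustration, not part of the paper.

```python
import math

def baskakov_basis(n, k, x):
    # b_{n,k}(x) = C(n+k-1, k) x^k (1+x)^{-(n+k)}
    return math.comb(n + k - 1, k) * x**k / (1 + x)**(n + k)

def baskakov_kantorovich(f, n, x, kmax=200):
    # (K_n f)(x) = sum_k b_{n,k}(x) * n * int_{k/n}^{(k+1)/n} f(t) dt,
    # with the cell average approximated by the midpoint rule and the
    # series truncated at kmax (the tail decays geometrically).
    total = 0.0
    for k in range(kmax + 1):
        a, b = k / n, (k + 1) / n
        avg = f((a + b) / 2)  # midpoint rule for n * int_a^b f(t) dt
        total += baskakov_basis(n, k, x) * avg
    return total
```

Since the basis sums to one, constants are reproduced, and for \(f(t)=t\) one gets \(x + 1/(2n)\), which shows the typical \(1/n\) bias of the Kantorovich modification.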
For the entire collection see [Zbl 1453.41001].

Converse estimates for the simultaneous approximation by Bernstein polynomials with integer coefficients.
https://zbmath.org/1460.41009
Authors: Draganov, Borislav R. (https://zbmath.org/authors/?q=ai:draganov.borislav-r)
Summary: We prove a weak converse estimate for the simultaneous approximation by several forms of the Bernstein polynomials with integer coefficients. It is stated in terms of moduli of smoothness. In particular, it yields a big \(O\)-characterization of the rate of that approximation. We also show that the approximation process generated by these Bernstein polynomials with integer coefficients is saturated. We identify its saturation rate and the trivial class.
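As an illustration of the objects involved, here is a sketch comparing the classical Bernstein polynomial with one classical integer-coefficient variant (rounding \(f(k/n)\binom{n}{k}\) to the nearest integer); the paper analyzes several such forms, and this rounding rule is only one representative choice.

```python
import math

def bernstein(f, n, x):
    # classical Bernstein polynomial B_n(f, x)
    return sum(f(k / n) * math.comb(n, k) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))

def bernstein_int(f, n, x):
    # integer-coefficient variant: round each coefficient f(k/n)*C(n,k)
    # to the nearest integer, so the expanded polynomial lies in Z[x]
    return sum(round(f(k / n) * math.comb(n, k)) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))
```

For \(f(t)=t^2\) one has \(B_n(f,x)=x^2+x(1-x)/n\) exactly, and the rounding perturbs the value only by an exponentially small amount in the interior of \([0,1]\).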
For the entire collection see [Zbl 1453.41001].

Linear \(k\)-monotonicity preserving algorithms and their approximation properties.
https://zbmath.org/1460.41010
Authors: Sidorov, S. P. (https://zbmath.org/authors/?q=ai:sidorov.sergei-petrovich)
Summary: This paper examines the problem of finding the linear algorithm (operator) of finite rank \(n\) (i.e., with an \(n\)-dimensional range) which gives the minimal error of approximation of the identity operator on some set, over all linear operators of finite rank \(n\) preserving the cone of \(k\)-monotone functions. We introduce the notion of linear relative (shape-preserving) \(n\)-width and find asymptotic estimates of linear relative \(n\)-widths for linear operators preserving \(k\)-monotonicity in the space \(C^k[0,1]\). The estimates show that if a linear operator of finite rank \(n\) preserves \(k\)-monotonicity, then the degree of simultaneous approximation of the derivatives of order \(0\leq i\leq k\) of continuous functions by the derivatives of this operator cannot be better than \(n^{-2}\), even on the set of algebraic polynomials of degree \(k+2\) (as well as on bounded subsets of the Sobolev space \(W^{(k+2)}_\infty [0,1]\)).
For the entire collection see [Zbl 1334.68018].

Voronovskaja type theorems for positive linear operators related to squared fundamental functions.
https://zbmath.org/1460.41012
Authors: Abel, Ulrich (https://zbmath.org/authors/?q=ai:abel.ulrich)
Summary: For a sequence of positive linear approximation operators defined by means of the squared Bernstein basis polynomials, the squared Favard-Szász-Mirakjan fundamental functions and the squared Baskakov fundamental functions, we derive a complete asymptotic expansion. The initial coefficients are explicitly calculated. As a special case, we obtain a Voronovskaja type formula. Furthermore, we introduce two Durrmeyer-type variants and calculate the initial coefficients of their asymptotic expansions. In each case the trivial class is determined. Finally, we study the asymptotic properties of operators defined by means of the squared Meyer-König and Zeller fundamental functions.
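For the Bernstein case, such an operator can be sketched as follows; the normalization by the sum of the squared basis functions is an assumption matching the usual definition of operators built from squared fundamental functions, not necessarily the paper's exact normalization.

```python
import math

def bernstein_basis(n, k, x):
    # p_{n,k}(x) = C(n,k) x^k (1-x)^(n-k)
    return math.comb(n, k) * x**k * (1 - x)**(n - k)

def squared_bernstein_operator(f, n, x):
    # positive linear operator with weights p_{n,k}(x)^2,
    # normalized so that the weights sum to 1
    w = [bernstein_basis(n, k, x)**2 for k in range(n + 1)]
    s = sum(w)
    return sum(wk * f(k / n) for k, wk in enumerate(w)) / s
```

The normalization makes the operator reproduce constants exactly, while \(f(t)=t\) is reproduced only up to an \(O(1/n)\) error, which is exactly the kind of behavior a Voronovskaja type formula quantifies.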
For the entire collection see [Zbl 1453.41001].

Uniform convergence rates for the approximated halfspace and projection depth.
https://zbmath.org/1460.62060
Authors: Nagy, Stanislav (https://zbmath.org/authors/?q=ai:nagy.stanislav); Dyckerhoff, Rainer (https://zbmath.org/authors/?q=ai:dyckerhoff.rainer); Mozharovskyi, Pavlo (https://zbmath.org/authors/?q=ai:mozharovskyi.pavlo)
Data depth is a general concept used in nonparametric statistical analysis of multidimensional data; the halfspace depth and the projection depth are two frequently used instances. Since exact evaluation of these depths is often difficult, this paper studies the statistical properties of procedures that approximate the true depth. The authors explore conditions under which the approximated depth converges uniformly to the true depth and evaluate the corresponding convergence rates. Under some regularity conditions, uniform approximation is shown to be valid, and two main theorems establish explicit, exact rates of convergence for a number of distributions, including the multivariate Gaussian. Explicit guidelines are given for the choice of the number \(n\) of random projection directions needed to achieve a desired quality of approximation. Situations where uniform approximation cannot be achieved are also discussed, as are extensions of the concept of projection depth.
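The approximation scheme for the halfspace depth can be sketched as minimizing the halfspace mass over a finite set of random unit directions. The Gaussian direction sampling and the fixed direction count below are generic choices for illustration, not the paper's specific guidelines.

```python
import math
import random

def approx_halfspace_depth(x, data, n_dirs=1000, seed=0):
    """Approximate the halfspace depth of x w.r.t. a finite sample by
    minimizing, over random unit directions u, the fraction of sample
    points lying in the closed halfspace {p : <u, p> <= <u, x>}."""
    rng = random.Random(seed)
    d, n = len(x), len(data)
    depth = 1.0
    for _ in range(n_dirs):
        # draw a uniformly distributed unit direction via normalized Gaussians
        u = [rng.gauss(0.0, 1.0) for _ in range(d)]
        norm = math.sqrt(sum(c * c for c in u))
        u = [c / norm for c in u]
        px = sum(c * xc for c, xc in zip(u, x))
        count = sum(1 for p in data
                    if sum(c * pc for c, pc in zip(u, p)) <= px + 1e-12)
        depth = min(depth, count / n)
    return depth
```

For a symmetric four-point sample, the center has approximated depth \(1/2\) while a point outside the convex hull has depth \(0\), as expected.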
Reviewer: Arakaparampil M. Mathai (Montréal)

Analysis of the rate of convergence of fully connected deep neural network regression estimates with smooth activation function.
https://zbmath.org/1460.62159
Authors: Langer, Sophie (https://zbmath.org/authors/?q=ai:langer.sophie)
This paper investigates regression estimators based on deep neural networks (DNNs). In a previous article [the author and \textit{M. Kohler}, ``On the rate of convergence of fully connected deep neural network regression estimates'', Preprint, \url{arXiv:1908.11133}], neural networks with the rectified linear unit (ReLU) activation function were considered. The question here is whether the same rate of convergence can be achieved by fully connected deep neural network regression estimators with a smooth activation function, the sigmoid. Indeed, the main result of the present paper proves that, under a set of sufficient conditions, the \(L_2\)-errors of least squares neural network regression estimators based on a set of fully connected DNNs with a fixed number of layers achieve a similar rate of convergence as in the mentioned article.
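As a toy illustration of the estimator class, here is a single-hidden-layer network with sigmoid activation fitted by plain gradient descent; the paper's analysis concerns deeper fully connected networks and least squares estimators, so this is only a minimal sketch with arbitrarily chosen width, learning rate, and target function.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_sigmoid_net(width=20, steps=3000, lr=0.1, seed=0):
    """Fit a one-hidden-layer sigmoid network to y = sin(3x) on [-1, 1]
    by full-batch gradient descent on the mean squared error."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-1.0, 1.0, (200, 1))
    y = np.sin(3.0 * X)
    W1 = rng.normal(0.0, 1.0, (1, width)); b1 = np.zeros(width)
    W2 = rng.normal(0.0, 1.0, (width, 1)); b2 = np.zeros(1)

    def loss():
        H = sigmoid(X @ W1 + b1)
        return float(np.mean((H @ W2 + b2 - y) ** 2))

    initial = loss()
    for _ in range(steps):
        H = sigmoid(X @ W1 + b1)
        pred = H @ W2 + b2
        g = 2.0 * (pred - y) / len(X)        # dL/dpred
        gW2, gb2 = H.T @ g, g.sum(axis=0)
        gz = (g @ W2.T) * H * (1.0 - H)      # backprop through the sigmoid
        gW1, gb1 = X.T @ gz, gz.sum(axis=0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1
    return initial, loss()
```

Running the sketch shows the empirical \(L_2\)-error decreasing with training, which is the quantity whose population analogue the paper's rate-of-convergence result controls.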
Reviewer: Claudia Simionescu-Badea (Wien)

Quasi-projection operators in weighted \(L_p\) spaces.
https://zbmath.org/1460.41015
Authors: Kolomoitsev, Yu. (https://zbmath.org/authors/?q=ai:kolomoitsev.yurii-s); Skopina, M. (https://zbmath.org/authors/?q=ai:skopina.maria-a)
Signals of various forms, in particular band-limited ones, can be well approximated by expansions that are effectively quasi-interpolants, i.e., sums of scaled shifts of a kernel function weighted by function values. Another specialized case is that of projections, where the function evaluations are replaced by inner products with the kernels (\(L^2\) projections). Both shapes can be subsumed under general quasi-interpolants, whose coefficients are restricted neither to function values nor to inner products (they could be averages, derivatives, etc.; the useful concept of quasi-interpolation is quite general).
The arguments of the kernels are usually scaled by a power of \(h\), but in this article more general scalings by powers of matrices \(M\) are allowed. In that case, the approximation orders are no longer expressed in powers of \(h\) (or in moduli of smoothness with step \(h\)) but in terms of the reciprocals of the smallest diagonal elements in modulus of the matrices and their powers.
One of the essential points of approximations of this form is the choice of approximation spaces, and the authors of this paper present an \textit{Ansatz\/} using \(L^p\)-spaces with weights. The particular results the authors have in mind are error bounds that represent the approximation power of the projections.
The approximation orders depend on conditions on the kernel's Fourier transform at zero; they are of the same type as the celebrated Strang and Fix conditions. This usually means that the kernel's Fourier transform is \(1\) at the origin and certain of its derivatives vanish there, or in the extreme case, that it is identically one almost everywhere in a neighbourhood. (In this paper, these properties are called ``weakly compatible'' and ``strictly compatible'', respectively.)
Depending on the order \(n\) of these Strang and Fix type conditions, the authors derive convergence results for the projectors, expressed in terms of \(n\)th moduli of continuity in weighted \(L^p\) norms, with arguments given by inverse powers of the scaling matrix. If the order of weak compatibility is \(n\), one obtains approximation order \(n\) (Theorem~9, for example); under strict compatibility, any order can be obtained (Corollary~8, for example).
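Schematically, such a quasi-projection operator with matrix dilation and the resulting bound take the following form; the normalization and the precise shape of the functionals here are assumptions for illustration, since the paper works with rather general quasi-projection operators and weights.

```latex
\[
  Q_j f \;=\; \sum_{k\in\mathbb{Z}^d}
  \langle f,\ \widetilde{\varphi}(M^j\cdot-k)\rangle \,
  |\det M|^{\,j}\, \varphi(M^j\cdot-k),
\]
with, under a weak compatibility condition of order \(n\), an error bound of the type
\[
  \|f - Q_j f\|_{L_p(w)} \;\le\; C\,
  \omega_n\!\big(f,\ \|M^{-j}\|\big)_{L_p(w)},
\]
```
where \(\omega_n\) denotes an \(n\)th modulus of smoothness taken in the weighted norm.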
Reviewer: Martin D. Buhmann (Gießen)