Wavelet shrinkage: Asymptopia?

Zbl 0827.62035

Summary: Much recent effort has sought asymptotically minimax methods for recovering infinite-dimensional objects – curves, densities, spectral densities, images – from noisy data. A now rich and complex body of work develops nearly or exactly minimax estimators for an array of interesting problems. Unfortunately, the results have rarely moved into practice, for a variety of reasons – among them being similarity to known methods, computational intractability and lack of spatial adaptivity.

We discuss a method for curve estimation based on \(n\) noisy data: translate the empirical wavelet coefficients towards the origin by an amount \(\sqrt{2\log n}\,\sigma/\sqrt{n}\). The proposal differs from those in current use, is computationally practical and is spatially adaptive; it thus avoids several of the previous objections. Further, the method is nearly minimax both for a wide variety of loss functions – pointwise error, global error measured in \(L^p\)-norms, pointwise and global error in estimation of derivatives – and for a wide range of smoothness classes, including standard Hölder and Sobolev classes, and bounded variation.
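The shrinkage rule can be sketched as follows (a minimal illustration, not the authors' code). With an orthonormal discrete wavelet transform of \(n\) samples whose noise has standard deviation \(\sigma\), each empirical coefficient carries noise of standard deviation \(\sigma\), so the translation amount \(\sqrt{2\log n}\,\sigma/\sqrt{n}\) in the function-space normalization corresponds to soft-thresholding the raw transform coefficients at \(t = \sigma\sqrt{2\log n}\). The sketch below uses the Haar wavelet for simplicity; the function names are illustrative.

```python
import math

def haar_dwt(x):
    """Full orthonormal Haar decomposition of a length-2^J signal.
    Returns the coarsest approximation and detail levels (fine to coarse)."""
    details = []
    s = list(x)
    while len(s) > 1:
        a = [(s[2*i] + s[2*i+1]) / math.sqrt(2) for i in range(len(s) // 2)]
        d = [(s[2*i] - s[2*i+1]) / math.sqrt(2) for i in range(len(s) // 2)]
        details.append(d)
        s = a
    return s, details

def haar_idwt(approx, details):
    """Invert haar_dwt: rebuild the signal from approximation and details."""
    a = list(approx)
    for d in reversed(details):
        nxt = []
        for ai, di in zip(a, d):
            nxt.append((ai + di) / math.sqrt(2))
            nxt.append((ai - di) / math.sqrt(2))
        a = nxt
    return a

def soft(w, t):
    """Translate a coefficient towards the origin by t (soft thresholding)."""
    return math.copysign(max(abs(w) - t, 0.0), w)

def wavelet_shrink(y, sigma):
    """Denoise y by soft-thresholding Haar detail coefficients at the
    universal threshold t = sigma * sqrt(2 log n)."""
    n = len(y)
    t = sigma * math.sqrt(2 * math.log(n))
    approx, details = haar_dwt(y)
    details = [[soft(d, t) for d in level] for level in details]
    return haar_idwt(approx, details)
```

Because the threshold grows only like \(\sqrt{2\log n}\), coefficients well above the noise level survive essentially intact, which is what gives the estimator its spatial adaptivity: isolated large coefficients (edges, spikes) are kept while pure-noise coefficients are set to zero.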

This is a much broader near-optimality than anything previously proposed: we draw loose parallels with near-optimality in robustness and also with the broad near-eigenfunction properties of wavelets themselves. Finally, the theory underlying the method is interesting, as it exploits a correspondence between statistical questions and questions of optimal recovery and information-based complexity.
