## From model selection to adaptive estimation.
*(English)*
Zbl 0920.62042

Pollard, David (ed.) et al., Festschrift for Lucien Le Cam: research papers in probability and statistics. New York, NY: Springer. 55-87 (1997).

The objective of the paper under review is to illustrate, by a few theorems and applications mainly directed towards adaptive estimation in Besov spaces, the power and versatility of the method of penalized minimum contrast estimation on sieves. The authors consider linear sieves and the particular contrast that defines projection estimators for density estimation.

Let \(S_m\) be a collection of linear sieves with respective dimensions \(D_m\) and suitable properties, and let \(s_m\) be the best approximant in \(S_m\) of an unknown density \(s\) (from \({\mathbf L}^2(\mu)\) with norm \(\|\cdot\|\)), based on \(n\) given observations. It is shown that if a single sieve \(S\) is replaced by the collection \(S_m\), and the penalty function has the form \(L(m)D_m/n\) (where \(L(m)\) is either uniformly bounded or possibly of order \(\log n\)), then one gets a risk which, up to a multiplicative constant, realizes the best trade-off between \(\|s-s_m\|^2\) and \(L(m)D_m/n\).
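In symbols, the trade-off described above is an oracle-type inequality; a sketch consistent with the review's description (the estimator symbol \(\tilde{s}\) and constant \(C\) are notational choices of this summary, not the paper's):
\[
\mathbb{E}\,\|s-\tilde{s}\|^{2}\;\le\;C\,\inf_{m}\left\{\|s-s_m\|^{2}+\frac{L(m)\,D_m}{n}\right\},
\]
where \(\tilde{s}\) denotes the penalized projection estimator and the infimum runs over the collection of sieves.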

The authors show that some methods of adaptive density estimation (such as unbiased cross-validation and hard thresholding of empirical wavelet coefficients) can be viewed as special instances of penalized projection estimators. To emphasize the flexibility and potential of the penalization methods, the authors discuss various families of sieves and penalties and propose some new adaptive estimators.
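A minimal numerical sketch of the penalized projection idea on nested sieves may help fix ideas. All specifics below (the cosine basis, the Beta sample, the penalty constant `c = 2.0`, the cap `max_dim = 30`) are illustrative assumptions of this summary, not choices made in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: n observations from a density on [0, 1]
n = 500
X = rng.beta(2, 5, size=n)

# Orthonormal cosine basis on [0, 1]: phi_0 = 1, phi_j(x) = sqrt(2) cos(pi j x)
def phi(j, x):
    if j == 0:
        return np.ones_like(x)
    return np.sqrt(2.0) * np.cos(np.pi * j * x)

# Empirical coefficients beta_hat_j = (1/n) sum_i phi_j(X_i)
max_dim = 30
beta_hat = np.array([phi(j, X).mean() for j in range(max_dim)])

# For the projection estimator on the sieve of dimension D, the minimized
# empirical contrast equals -sum_{j < D} beta_hat_j^2; adding a penalty
# proportional to D/n gives the penalized criterion to minimize over D.
c = 2.0  # penalty constant (assumed; the theory allows L(m) bounded or ~ log n)
crit = [-(beta_hat[:D] ** 2).sum() + c * D / n for D in range(1, max_dim + 1)]
D_star = 1 + int(np.argmin(crit))

print("selected sieve dimension:", D_star)
```

The selected dimension balances the fitted coefficient energy (a proxy for the approximation error) against the `c * D / n` penalty, mirroring the bias/penalty trade-off stated above.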

For the entire collection see [Zbl 0861.00032].

Reviewer: Joseph Melamed (Los Angeles)