zbMATH — the first resource for mathematics

Heuristics of instability and stabilization in model selection. (English) Zbl 0867.62055
Summary: In model selection, usually a “best” predictor is chosen from a collection $\{\widehat{\mu} (\cdot,s)\}$ of predictors where $\widehat{\mu} (\cdot,s)$ is the minimum least-squares predictor in a collection ${\cal U}_s$ of predictors. Here, $s$ is a complexity parameter; that is, the smaller $s$, the lower dimensional/smoother the models in ${\cal U}_s$. If ${\cal L}$ is the data used to derive the sequence $\{\widehat{\mu} (\cdot,s)\}$, the procedure is called unstable if a small change in ${\cal L}$ can cause large changes in $\{\widehat{\mu} (\cdot,s)\}$. With a crystal ball, one could pick the predictor in $\{\widehat{\mu} (\cdot,s)\}$ having minimum prediction error. Without prescience, one uses test sets, cross-validation and so forth. The difference in prediction error between the crystal-ball selection and the statistician’s choice we call predictive loss. For an unstable procedure the predictive loss is large. This is shown by some analytics in a simple case and by simulation results in a more complex comparison of four different linear regression methods. Unstable procedures can be stabilized by perturbing the data, getting a new predictor sequence $\{\widehat{\mu}' (\cdot,s)\}$ and then averaging over many such predictor sequences.
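The stabilization scheme sketched in the summary (perturb the data, refit, average the resulting predictors) is essentially bagging applied to least-squares model selection. A minimal NumPy sketch, using hypothetical toy data and taking the nested collection ${\cal U}_s$ to be linear models on the first $s$ coordinates (both are illustrative assumptions, not the paper's exact setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: two informative and three irrelevant features.
n, p = 60, 5
X = rng.normal(size=(n, p))
true_beta = np.array([2.0, -1.0, 0.0, 0.0, 0.0])
y = X @ true_beta + rng.normal(scale=0.5, size=n)

def ls_predictor(X, y, s):
    """Minimum least-squares predictor restricted to the first s
    coordinates -- a stand-in for the nested collection U_s."""
    coef = np.zeros(X.shape[1])
    coef[:s] = np.linalg.lstsq(X[:, :s], y, rcond=None)[0]
    return coef

def stabilized_predictor(X, y, s, B=50):
    """Perturb the data by bootstrap resampling, refit, and average
    the B resulting predictors (the stabilization idea above)."""
    n = len(y)
    coefs = [ls_predictor(X[idx], y[idx], s)
             for idx in (rng.integers(0, n, size=n) for _ in range(B))]
    return np.mean(coefs, axis=0)

single = ls_predictor(X, y, s=2)       # one draw of mu-hat(., s)
bagged = stabilized_predictor(X, y, s=2)  # averaged mu-hat'(., s)
```

With a discrete selection step (e.g. best-subset rather than a fixed coordinate set), `single` can jump between very different models under small data perturbations, while the averaged `bagged` coefficients vary smoothly; that contrast is the instability the paper quantifies.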

62H99 Multivariate analysis
62J05 Linear regression
Full Text: DOI
[1] BREIMAN, L. (1992). The little bootstrap and other methods for dimensionality selection in regression: X-fixed prediction error. J. Amer. Statist. Assoc. 87, 738-754. · Zbl 0850.62518 · doi:10.2307/2290212 · http://links.jstor.org/sici?sici=0162-1459%28199209%2987%3A419%3C738%3ATLBAOM%3E2.0.CO%3B2-Q&origin=euclid
[2] BREIMAN, L. (1995). Better subset regression using the non-negative garrote. Technometrics 37, 373-384. · Zbl 0862.62059 · doi:10.2307/1269730 · http://links.jstor.org/sici?sici=0040-1706%28199511%2937%3A4%3C373%3ABSRUTN%3E2.0.CO%3B2-3&origin=euclid
[3] BREIMAN, L. (1996a). Stacked regressions. Machine Learning 24, 49-64. · Zbl 0849.68104
[4] BREIMAN, L. (1996b). Bagging predictors. Machine Learning 24, 123-140. · Zbl 0858.68080
[5] BREIMAN, L. (1996c). Bias, variance and arcing classifiers. Report 460, Dept. Statistics, Univ. California.
[6] BREIMAN, L. and SPECTOR, P. (1992). Submodel selection and evaluation in regression: the random X case. Internat. Statist. Rev. 60, 291-319.
[7] WOLPERT, D. (1992). Stacked generalization. Neural Networks 5, 241-259.