Smoothing methods in statistics. (English) Zbl 0859.62035
Springer Series in Statistics. New York, NY: Springer. 338 p. DM 84.00; öS 613.20; sFr 74.00 (1996).

The basic objective of statistics is to extract all the information from the data in order to deduce properties of the population that generated them. In classical statistical analysis, a parametric model is assumed for the underlying population. A weakness of this classical approach is that, if the assumed model is not the correct one, inferences can be worse than useless, leading to grossly misleading interpretations of the data. The opposite extreme to the parametric approach is to make no assumptions at all about the underlying population; no true data summary, however, can be obtained from this purely nonparametric approach. Smoothing methods provide a bridge between the purely nonparametric and the parametric approach. They can aid data analysis in two important ways: by extracting more information from the data than is possible purely nonparametrically, as long as the (weak) assumption of smoothness is reasonable; and by freeing the analyst from the “parametric straitjacket” of rigid distributional assumptions, thereby providing analyses that are both flexible and robust.
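
(To fix ideas, in standard notation not necessarily that of the book: the kernel density estimator, the prototypical smoothing method, has the form
$$\hat f_h(x)=\frac{1}{nh}\sum_{i=1}^n K\Big(\frac{x-x_i}{h}\Big),$$
where $K$ is a smooth, symmetric kernel and $h>0$ is the smoothing parameter, or bandwidth. Letting $h\to 0$ essentially reproduces the raw data as a collection of spikes, the purely nonparametric extreme, while increasing $h$ trades fidelity to the data for smoothness; intermediate, data-driven choices of $h$ realize precisely the bridge described above.)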

This book provides a general discussion of smoothing methods in statistics, with particular emphasis on the application of such methods to real data problems. It starts in Chapter 2 with two simple methods for estimating a univariate density: the histogram and the frequency polygon. Some of the most important issues in the methodology of smoothing, such as the evaluation of smooth estimates and the selection of smoothing parameters, are illustrated and discussed through these two simple methods. More advanced univariate smoothing methods, such as kernel, local likelihood, roughness penalty and spline-based methods, are introduced in Chapter 3. Chapter 4 presents smoothing methods for multivariate densities. Smoothing methods for the nonparametric regression model, an alternative to the widely used linear regression model, are discussed and compared in Chapter 5; these include kernel, local polynomial and spline estimators. Chapter 6 deals with smoothing for ordered categorical data. Applications of smoothing methods to other areas, such as discriminant analysis, goodness-of-fit tests and the bootstrap, are the subject of Chapter 7.
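
(Again in standard notation not necessarily that of the book: the classical kernel estimator of the regression function treated in Chapter 5, the Nadaraya-Watson estimator, is the locally weighted average
$$\hat m_h(x)=\frac{\sum_{i=1}^n K\big((x-x_i)/h\big)\,y_i}{\sum_{i=1}^n K\big((x-x_i)/h\big)},$$
with kernel $K$ and bandwidth $h$ as above. The local polynomial estimator generalizes this by fitting a low-degree polynomial locally by weighted least squares, and the choice of $h$, made in practice by data-driven criteria such as cross-validation, is one instance of the smoothing-parameter selection problem discussed throughout the book.)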

The emphasis of this book is on the practical application of smoothing methods. Every method presented is illustrated by real examples, and important issues related to the application of these methods are discussed in great detail. The implementation of the methods and the availability of software are described in detail in the section on “Computational Issues” at the end of each chapter. There are also exercises at the end of each chapter, which will help readers digest the methods presented. A World Wide Web site is even announced, giving access to the data sets used in the book, updated information on the computational issues discussed in the book, an errata list, and a list of updated references. Those interested in the advanced theory behind the smoothing methods, a very active area of current statistical research, will, however, have to refer to the original research papers listed in the section “Background material” of the book.

Reviewer: D. Tu (Kingston)
MSC:
62G07 Density estimation
62-01 Textbooks (statistics)
62-02 Research monographs (statistics)
62P99 Applications of statistics