Springer Series in Statistics. New York, NY: Springer. 338 p. DM 84.00; öS 613.20; sFr 74.00 (1996).
The basic objective of statistics is to extract information from data in order to deduce properties of the population that generated the data. In classical statistical analysis, a parametric model is assumed for the underlying population. A weakness of this classical approach is that if the assumed model is not the correct one, inferences can be worse than useless, leading to grossly misleading interpretations of the data. The opposite extreme to the parametric approach is to make no assumptions about the underlying population that generated the data. No real data summary, however, can be obtained from such a purely nonparametric approach. Smoothing methods provide a bridge between the purely nonparametric and the parametric approaches. There are two important ways in which smoothing methods can aid data analysis: by extracting more information from the data than is possible purely nonparametrically, as long as the (weak) assumption of smoothness is reasonable; and by freeing the analyst from the “parametric straitjacket” of rigid distributional assumptions, thereby providing analyses that are both flexible and robust.
This book provides a general discussion of smoothing methods in statistics, with particular emphasis on applications of such methods to real data problems. It starts in Chapter 2 with two simple methods of estimating a univariate density: the histogram and the frequency polygon. Some of the most important issues in the methodology of smoothing, such as the evaluation of smooth estimates and the selection of smoothing parameters, are illustrated and discussed through these two simple methods. More advanced univariate smoothing methods, such as kernel, local likelihood, roughness penalty and spline-based methods, are introduced in Chapter 3. Chapter 4 presents smoothing methods for multivariate densities. Smoothing methods for the nonparametric regression model, an alternative to the widely used linear regression model, are discussed and compared in Chapter 5; these include the kernel, local polynomial, and spline estimators. Chapter 6 deals with smoothing for ordered categorical data. Applications of smoothing methods to other areas, such as discriminant analysis, goodness-of-fit tests and the bootstrap, are the subject of Chapter 7.
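To give a flavour of the kernel method and the smoothing-parameter issue discussed in these chapters, the following minimal sketch implements a Gaussian kernel density estimate with Silverman's rule-of-thumb bandwidth. The function names and interface are illustrative choices, not taken from the book.

```python
import numpy as np

def gaussian_kde(data, grid, bandwidth):
    """Evaluate a Gaussian kernel density estimate on the points in `grid`."""
    # Scaled distances between every grid point and every observation.
    u = (grid[:, None] - data[None, :]) / bandwidth
    # Average of Gaussian kernels centred at each observation.
    return np.exp(-0.5 * u**2).sum(axis=1) / (
        len(data) * bandwidth * np.sqrt(2 * np.pi)
    )

def silverman_bandwidth(data):
    # Rule-of-thumb smoothing parameter for a Gaussian kernel;
    # one of many possible bandwidth selectors.
    return 1.06 * np.std(data) * len(data) ** (-1 / 5)

rng = np.random.default_rng(0)
sample = rng.normal(size=200)
grid = np.linspace(-4.0, 4.0, 401)
density = gaussian_kde(sample, grid, silverman_bandwidth(sample))
# Over a wide grid the estimate should integrate to roughly one.
print(density.sum() * (grid[1] - grid[0]))
```

Varying the bandwidth here makes the central trade-off visible: a small value produces a ragged, undersmoothed estimate, while a large one washes out genuine features of the density.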
The emphasis of this book is on the practical application of smoothing methods. Every method presented is illustrated with real examples, and important issues related to the application of these methods are discussed in great detail. The implementation of the methods and the availability of software are described in detail in the section on “Computational Issues” at the end of each chapter. There are also exercises at the end of each chapter, which will help readers better digest the methods presented. A World Wide Web site is even announced, which provides access to the data sets used in the book, updated information on the computational issues discussed in the book, an errata list, and a list of updated references. Readers interested in the advanced theory behind smoothing methods, a very active area of current statistical research, will, however, need to consult the original research papers listed in the “Background material” sections of the book.