zbMATH — the first resource for mathematics

Theory of point estimation. 2nd ed. (English) Zbl 0916.62017
Springer Texts in Statistics. New York, NY: Springer. xxvi, 589 p. (1998).
The “Theory of point estimation” was first published in 1983; see the review Zbl 0522.62020. It has been used as a graduate-level textbook on estimation theory at a number of universities. However, much new work has since been done in this field, making an updated edition of the original book necessary. The current (second) edition contains a significant amount of new material and includes many more references. This book and its companion, the second edition of “Testing statistical hypotheses” (1986; Zbl 0608.62020), provide a unified account of classical statistical inference. A thorough knowledge of calculus and linear algebra is an essential prerequisite; some knowledge of measure theory will be helpful.
The book has six chapters, a comprehensive list of references, an author index, and a subject index. Apart from prefaces to the first and second editions, the front matter of the book also includes a complete list of examples, tables, and figures, as well as a table explaining the notation used in the book. The first four chapters discuss exact theory of inference in small samples. The final two chapters deal with large-sample theory. Only point estimation in Euclidean spaces is covered in the book. As a result, estimation in sequential analysis, stochastic processes, and function spaces is not covered.
Chapter 1 lays the groundwork for the later material. It begins with an overview of three approaches to analyzing data – data analysis, classical inference and decision theory, and Bayesian analysis. The estimation problem is formulated in terms of loss functions and risk functions. There is a brief but adequate overview of a variety of background material, including measure theory, probability theory, convergence theorems, transformation groups, exponential families, sufficient statistics, completeness, and convex loss functions.
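The loss–risk formulation mentioned above is the standard decision-theoretic one; as a sketch (the notation here is generic and not necessarily the book's):

```latex
% For an estimator \delta of g(\theta) and a loss function L, the risk is
% the expected loss under the distribution indexed by \theta:
R(\theta, \delta) \;=\; E_\theta\, L\bigl(\theta, \delta(X)\bigr)
               \;=\; \int L\bigl(\theta, \delta(x)\bigr)\, dP_\theta(x).
```

Estimators are then compared through their risk functions, which is what motivates the optimality criteria (unbiasedness, equivariance, average risk, minimaxity) of the later chapters.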
Chapter 2 focuses on unbiased estimators. Locally minimum variance unbiased estimators and uniformly minimum variance unbiased (UMVU) estimators are introduced. Methods for finding UMVU estimators are discussed. Examples of UMVU estimators are provided in the context of both continuous and discrete distributions. UMVU estimation based on observations from well-known distributions is discussed in detail. There is also a section on UMVU estimation for nonparametric families. Various information inequalities are introduced and discussed in detail.
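A central tool behind the methods for finding UMVU estimators is the Lehmann–Scheffé construction; a standard statement (paraphrased, not quoted from the book) is:

```latex
% If T is a complete sufficient statistic and \delta(X) is any unbiased
% estimator of g(\theta), then conditioning on T yields the (essentially
% unique) UMVU estimator:
\hat g(T) \;=\; E\bigl[\delta(X) \mid T\bigr],
\qquad
\operatorname{Var}_\theta \hat g(T) \;\le\; \operatorname{Var}_\theta \delta(X)
\quad \text{for all } \theta.
```

The information inequalities of the chapter, such as the Cramér–Rao bound, give lower bounds against which the variance of such estimators can be compared.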
Chapter 3 is concerned with estimators that respect the inherent symmetry that may be present in an estimation problem. Such estimators are termed equivariant estimators. This chapter introduces the principle of equivariance and provides many examples of its application. Detailed analyses are provided for special cases of normal fixed, random, and mixed models, exponential mixed models, and finite population models.
Chapter 4 treats the question of optimality of estimators under much weaker requirements by considering average risk optimality rather than optimality for every value of the parameter. This approach leads to Bayes estimators. The authors provide a very nice discussion of different viewpoints one could take with respect to the Bayes approach. A number of examples are presented to illustrate the ideas. Concepts such as hierarchical Bayes models, conjugate priors, equivariant Bayes estimators and empirical Bayes estimators are introduced.
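The average-risk viewpoint leads to a closed-form characterization of Bayes estimators in the most common special case; as a standard sketch (notation generic):

```latex
% Under squared-error loss and prior \Lambda, the Bayes estimator is the
% posterior mean of the parameter given the data:
\delta_\Lambda(x) \;=\; E[\Theta \mid X = x]
\;=\; \frac{\int \theta\, p_\theta(x)\, d\Lambda(\theta)}
           {\int p_\theta(x)\, d\Lambda(\theta)},
```

where $p_\theta$ denotes the density of $X$ under $\theta$. Conjugate priors, mentioned above, are those for which this posterior stays within the prior's family, making the computation explicit.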
Chapter 5 discusses admissibility and minimaxity of estimators. A section is devoted to admissibility and minimaxity in exponential families, and another section to group families. The authors introduce here the problem of estimating a vector parameter. Obvious extensions of earlier theory for the case of several parameters are presented. Shrinkage estimators and their extensions are considered. The chapter concludes with a treatment of complete classes.
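The shrinkage estimators referred to above are exemplified by the James–Stein estimator, whose classical form (a standard result, stated here for orientation rather than quoted from the book) is:

```latex
% For X \sim N_p(\theta, I_p) with p \ge 3, the James–Stein estimator
\delta^{JS}(X) \;=\; \left(1 - \frac{p-2}{\|X\|^2}\right) X
% has strictly smaller risk than \delta(X) = X under squared-error loss
% for every \theta, so the natural estimator X is inadmissible.
```

This phenomenon is what makes the vector-parameter case genuinely different from the scalar case and motivates the chapter's treatment of admissibility and complete classes.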
Chapter 6 deals with questions concerning performance of estimators in large samples. Two approaches are presented – (a) the limiting moment approach and (b) the asymptotic distribution approach. The idea of asymptotic efficiency is introduced and illustrated via several examples. The existence of asymptotically efficient estimators and methods for finding such estimators, including the maximum likelihood approach, are discussed both for the scalar-parameter case and for the vector-parameter case. Applications of the techniques are presented. The chapter ends with a treatment of the asymptotic efficiency of Bayes estimators.
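Asymptotic efficiency, as discussed in this chapter, is usually made precise through the limiting distribution of the maximum likelihood estimator; a standard statement (under the usual regularity conditions, notation generic) is:

```latex
% With I(\theta) the Fisher information per observation, the MLE satisfies
\sqrt{n}\,\bigl(\hat\theta_n - \theta\bigr)
\;\xrightarrow{d}\; N\!\bigl(0,\; I(\theta)^{-1}\bigr),
```

so the MLE attains the information bound in the limit; an estimator with this limiting distribution is called asymptotically efficient, and the chapter's final section shows that Bayes estimators share this property.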
Each chapter concludes with a large collection of problems and the authors’ “notes”, which provide bibliographic and historical material as well as introductions to recent developments in point estimation not discussed in that chapter. In terms of content, the major additions relative to the first edition are in the areas of Bayesian inference, information inequalities, and simultaneous and shrinkage estimation. This second edition of the “Theory of point estimation” is not only well suited as a textbook for graduate-level courses in statistical inference, but should also be a useful reference for all practicing statisticians.

MSC:
62F10 Point estimation
62-01 Introductory exposition (textbooks, tutorial papers, etc.) pertaining to statistics
62F12 Asymptotic properties of parametric estimators
62F15 Bayesian inference