Mathematical statistics.

*(English)* Zbl 0935.62004
Springer Texts in Statistics. New York, NY: Springer. xiv, 529 p. (1999).

This book is intended as a textbook for a two-semester course in mathematical statistics, suitable for graduate students preparing to pursue a doctoral degree in statistics. The treatment is mathematically rigorous and assumes that students are well versed in advanced calculus. Prior exposure to mathematical analysis, including measure theory, will facilitate understanding of the material.

The book is organized in seven chapters. The first chapter lays the groundwork by providing a quick but comprehensive overview of measure-theoretic probability and the associated tools needed for a rigorous treatment of mathematical statistics. Topics covered include \(\sigma\)-fields, measures, the Radon-Nikodym derivative, integration, distributions, densities, moment generating functions, characteristic functions, conditional expectation, independence, conditional distributions, modes of convergence of random variables and random vectors, asymptotic behavior, the law of large numbers and the central limit theorem. Not all results are proved, but appropriate references are given.

Chapter 2 introduces the fundamental concepts of statistical inference. The chapter begins with a discussion of populations, samples, and statistical models. The distinction between parametric models and nonparametric models is explained. Location-scale families and exponential families of distributions are defined and examples are given. A brief but adequate treatment of completeness and sufficiency is provided. The concept of ancillarity is introduced. There is a section on statistical decision theory, decision rules, loss and risk functions, admissibility and optimality. This is followed by a section on statistical inference with a brief introduction to point estimation, hypothesis tests, and confidence sets. Criteria for judging the goodness of an inference procedure are introduced. These include bias, variance, mean square error, consistency and other asymptotic criteria.

Chapter 3 is devoted to the topic of unbiased and asymptotically unbiased estimation. Standard results concerning uniformly minimum variance unbiased estimation are discussed first. \(U\)-statistics and their properties are introduced in the context of nonparametric models. Least squares estimators, their robustness, and asymptotic behavior are considered. A section is devoted to unbiased estimators in survey sampling and the Horvitz-Thompson estimator. The chapter concludes with a treatment of method of moments estimation, asymptotically unbiased estimators and \(V\)-statistics, and weighted least squares estimation.
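
As a small illustration of the survey-sampling material, here is a minimal Python sketch of the Horvitz-Thompson estimator of a population total; the function name and the toy sample are of course not from the book:

```python
def horvitz_thompson_total(values, inclusion_probs):
    """Horvitz-Thompson estimator of a population total: sum of y_i / pi_i
    over the sample, where pi_i is the first-order inclusion probability
    of unit i. Unbiased under unequal-probability sampling designs."""
    if len(values) != len(inclusion_probs):
        raise ValueError("values and inclusion_probs must have equal length")
    return sum(y / p for y, p in zip(values, inclusion_probs))

# Under simple random sampling without replacement of n units from N,
# pi_i = n/N for every unit, so the estimator reduces to N * (sample mean).
sample = [4.0, 7.0, 5.0]
N, n = 12, 3
est = horvitz_thompson_total(sample, [n / N] * n)  # = 12 * mean(sample) = 64.0
```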

Chapter 4 discusses the theory of point estimation under parametric models. The chapter begins with the Bayesian approach, empirical and hierarchical Bayes methods, Bayes rules, and Bayes estimators. A brief discussion of Markov chain Monte Carlo methods, the Gibbs sampler, and the Metropolis algorithm is provided. The concept of invariance is introduced and the best invariant estimation problem for location-scale families is addressed. Minimax estimators are briefly discussed. The method of maximum likelihood is introduced and illustrated using the framework of generalized linear models. Criteria such as asymptotic efficiency and asymptotic optimality are discussed.
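
To give a flavor of the Metropolis algorithm mentioned above, the following is a minimal random-walk Metropolis sketch in Python (the target, step size, and function names are illustrative, not taken from the book):

```python
import math
import random

def metropolis_sample(log_density, x0, proposal_sd, n_steps, seed=0):
    """Random-walk Metropolis: propose x' ~ N(x, proposal_sd^2) and accept
    with probability min(1, p(x')/p(x)); on rejection, keep the current x."""
    rng = random.Random(seed)
    x, chain = x0, []
    for _ in range(n_steps):
        prop = x + rng.gauss(0.0, proposal_sd)
        # Compare on the log scale to avoid underflow.
        if math.log(rng.random()) < log_density(prop) - log_density(x):
            x = prop
        chain.append(x)
    return chain

# Target: standard normal, via its log density up to an additive constant.
chain = metropolis_sample(lambda t: -0.5 * t * t, 0.0, 1.0, 5000)
```

After discarding an initial burn-in segment in practice, the empirical moments of the chain approximate those of the target distribution.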

Chapter 5 is devoted to point estimation in nonparametric problems. Empirical cumulative distribution functions are considered in the i.i.d. setting and certain convergence results are established. Empirical likelihoods and density estimation are considered next and kernel density estimators are introduced. Statistical functionals and the concept of Gâteaux differentiability are explained. Robust estimators such as the \(L\)-, \(M\)-, and \(R\)-estimators are discussed. Generalized estimating equations (GEE) and corresponding GEE estimators are discussed along with their asymptotic properties. The chapter concludes with a brief discussion of the jackknife and the bootstrap.
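
A kernel density estimator of the kind treated in this chapter can be sketched in a few lines of Python; the Gaussian kernel and the toy data below are illustrative choices, not the book's examples:

```python
import math

def kernel_density(x, data, bandwidth):
    """Kernel density estimate at x with a Gaussian kernel:
    (1/(n*h)) * sum_i K((x - X_i)/h), where K is the standard normal
    density and h > 0 is the bandwidth."""
    n = len(data)
    return sum(
        math.exp(-0.5 * ((x - xi) / bandwidth) ** 2) / math.sqrt(2 * math.pi)
        for xi in data
    ) / (n * bandwidth)

data = [-1.0, 0.0, 1.0]
fhat = kernel_density(0.0, data, bandwidth=1.0)
```

The bandwidth governs the usual bias-variance trade-off: small \(h\) tracks the data closely but is noisy, large \(h\) oversmooths.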

Chapter 6 covers the general theory of hypothesis tests. The discussion begins with the celebrated Neyman-Pearson lemma and uniformly most powerful (UMP) tests. This is followed by a discussion of UMP tests for two-sided hypotheses and UMP unbiased tests. Invariance considerations then lead to a discussion of UMP invariant tests in normal linear models. Other methods for deriving tests such as the likelihood ratio, chi-square tests, and Bayes tests are discussed. For nonparametric settings there is a discussion of the classical procedures such as the sign test, permutation tests, and rank tests. The Kolmogorov-Smirnov tests and the Cramér-von Mises tests for goodness of fit are discussed. The chapter concludes with a brief treatment of empirical likelihood ratio tests and a method for constructing asymptotically size \(\alpha\) tests.
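
The permutation tests mentioned above admit a compact illustration. Here is a minimal Python sketch of an exact two-sample permutation test for a difference in means (two-sided); the tiny samples are illustrative only:

```python
import itertools

def permutation_test(x, y):
    """Exact two-sample permutation test for a difference in means.
    Enumerates every reassignment of the pooled sample into groups of the
    original sizes and returns the two-sided p-value: the fraction of
    reassignments whose |mean difference| is at least the observed one."""
    pooled = list(x) + list(y)
    n, m = len(x), len(y)
    observed = abs(sum(x) / n - sum(y) / m)
    count = total = 0
    for idx in itertools.combinations(range(n + m), n):
        group1 = [pooled[i] for i in idx]
        group2 = [pooled[i] for i in range(n + m) if i not in idx]
        stat = abs(sum(group1) / n - sum(group2) / m)
        total += 1
        if stat >= observed - 1e-12:
            count += 1
    return count / total

p = permutation_test([1.0, 2.0], [8.0, 9.0])  # 2 of the 6 splits are as extreme
```

Full enumeration is only feasible for small samples; in practice one samples permutations at random, which yields a Monte Carlo approximation to the same p-value.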

Chapter 7 is about interval estimation and confidence sets. Basic methods for constructing confidence sets are first introduced. These include pivotal quantity methods, inversion of an appropriate test, and Bayesian methods. The problem of constructing prediction intervals and prediction sets is also considered. Properties of confidence sets are then discussed, including confidence interval lengths, uniformly most accurate and uniformly most accurate unbiased confidence sets, and invariant confidence sets. The discussion then turns to asymptotic confidence sets, asymptotically pivotal quantities, and confidence sets based on likelihoods. Bootstrap confidence interval methods are introduced and the idea of bootstrap calibration of confidence intervals is explained. Simultaneous interval estimation is introduced and illustrated in the context of analysis of variance models with a discussion of the Scheffé method, the Bonferroni method, and the Tukey method. The chapter concludes with a discussion of confidence bands for a cumulative distribution function.
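
As one concrete instance of the bootstrap confidence interval methods discussed here, the percentile bootstrap can be sketched as follows in Python; the function name, seed handling, and toy data are illustrative assumptions:

```python
import random

def bootstrap_percentile_ci(data, stat, level=0.95, n_boot=2000, seed=0):
    """Percentile bootstrap confidence interval: resample the data with
    replacement, recompute the statistic on each resample, and take the
    empirical alpha/2 and 1 - alpha/2 quantiles of the replicates."""
    rng = random.Random(seed)
    reps = sorted(
        stat([rng.choice(data) for _ in data]) for _ in range(n_boot)
    )
    alpha = (1 - level) / 2
    lo = reps[int(alpha * n_boot)]
    hi = reps[int((1 - alpha) * n_boot) - 1]
    return lo, hi

data = [2.1, 2.4, 1.9, 2.8, 2.5, 2.2, 2.0, 2.6]
mean = lambda xs: sum(xs) / len(xs)
lo, hi = bootstrap_percentile_ci(data, mean)
```

Bootstrap calibration, also treated in the chapter, iterates this idea: a second level of resampling is used to adjust the nominal level so that the actual coverage is closer to the target.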

Concepts in each chapter are adequately illustrated using appropriate examples. Numerous exercises are provided at the end of each chapter. Appendix A contains a convenient list of abbreviations used in the book. Appendix B contains a summary of the notation used in the book. A list of references is provided at the end of the book with a prelude which tells the reader where to look for more information on specific topics. A fairly extensive bibliography is included. An author index and a subject index are also provided.

Although much of the book is devoted to the treatment of classical results in statistical inference, it is important to observe that a significant portion of the book is also devoted to more modern topics such as Markov chain Monte Carlo, robust estimation, generalized linear models, quasi-likelihoods, empirical likelihoods, statistical functionals, generalized estimating equations, the jackknife, and the bootstrap. The writing style is appealing and makes the material reader-friendly. Not only is this an excellent textbook for a graduate course in mathematical statistics, but it is also a valuable reference for practicing statisticians.

Reviewer: H.Iyer (Fort Collins)

##### MSC:

| MSC | Description |
| --- | --- |
| 62-01 | Introductory exposition (textbooks, tutorial papers, etc.) pertaining to statistics |
| 62-02 | Research exposition (monographs, survey articles) pertaining to statistics |