The theory of unbiased estimation. (English) Zbl 0063.01891

Summary: Let \(F(P)\) be a real valued function defined on a subset \(\mathcal{D}\) of the set \(\mathcal{D}^\ast\) of all probability distributions on the real line. A function \(f\) of \(n\) real variables is an unbiased estimate of \(F\) if for every system, \(X_1, \dots, X_n\), of independent random variables with the common distribution \(P\), the expectation of \(f(X_1, \dots, X_n)\) exists and equals \(F(P)\), for all \(P\) in \(\mathcal{D}\). A necessary and sufficient condition for the existence of an unbiased estimate is given (Theorem 1), and the way in which this condition applies to the moments of a distribution is described (Theorem 2). Under the assumptions that this condition is satisfied and that \(\mathcal{D}\) contains all purely discontinuous distributions, it is shown that there is a unique symmetric unbiased estimate (Theorem 3); the most general (non-symmetric) unbiased estimates are described (Theorem 4); and it is proved that among them the symmetric one is best in the sense of having the least variance (Theorem 5). Thus the classical estimates of the mean and the variance are justified from a new point of view, and also, from the theory, computable estimates of all higher moments are easily derived. It is interesting to note that for \(n\) greater than 3 neither the sample \(n\)th moment about the sample mean nor any constant multiple thereof is an unbiased estimate of the \(n\)th moment about the mean. Attention is called to a paradoxical situation arising in estimating such non-linear functions as the square of the first moment.
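The symmetric unbiased estimates discussed in the summary can be illustrated with a small sketch (function names here are illustrative, not from the paper): the classical variance estimate with divisor \(n-1\), and the symmetric unbiased estimate of the square of the first moment, \(\sum_{i \neq j} X_i X_j / (n(n-1))\). Unbiasedness can be checked exactly, not just by simulation, by enumerating all \(n\)-tuples from a finite (purely discontinuous) distribution:

```python
from itertools import product

def unbiased_mean_sq(xs):
    # Symmetric unbiased estimate of (first moment)^2:
    # the average of x_i * x_j over all ordered pairs with i != j,
    # computed via (sum)^2 - sum of squares.
    n = len(xs)
    s = sum(xs)
    s2 = sum(x * x for x in xs)
    return (s * s - s2) / (n * (n - 1))

def unbiased_variance(xs):
    # Classical sample variance with the n - 1 divisor.
    n = len(xs)
    m = sum(xs) / n
    return sum((x - m) ** 2 for x in xs) / (n - 1)

def exact_expectation(estimator, support, probs, n):
    # Exact expectation of estimator(X_1, ..., X_n) for i.i.d.
    # draws from a finite distribution, by summing over all
    # n-tuples weighted by their probabilities.
    total = 0.0
    for idx in product(range(len(support)), repeat=n):
        p = 1.0
        for i in idx:
            p *= probs[i]
        total += p * estimator([support[i] for i in idx])
    return total

# Distribution on {0, 1} with P(1) = 0.75: mean 0.75, variance 0.1875.
m2 = exact_expectation(unbiased_mean_sq, [0.0, 1.0], [0.25, 0.75], 3)
v = exact_expectation(unbiased_variance, [0.0, 1.0], [0.25, 0.75], 3)
```

The paradox the summary alludes to is visible here: although \((\text{first moment})^2 \ge 0\), the unique symmetric unbiased estimate of it can be negative, e.g. `unbiased_mean_sq([1.0, -1.0])` returns \(-1\).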


62-XX Statistics
Full Text: DOI