Power approximations to multinomial tests of fit. (English) Zbl 0683.62027

Multinomial tests for the fit of iid observations \(X_1,\ldots,X_n\) to a specified distribution F are based on the counts \(N_i\) of observations falling in k cells \(E_1,\ldots,E_k\) that partition the range of the \(X_j\). The earliest such test is based on the Pearson (1900) chi-squared statistic: \[ X^2=\sum^{k}_{i=1}(N_i-np_i)^2/np_i, \] where \(p_i=P_F(X_j\in E_i)\) are the cell probabilities under the null hypothesis. A common competing test is the likelihood ratio test based on \[ LR=2\sum^{k}_{i=1}N_i\log (N_i/np_i). \] N. Cressie and T. R. C. Read [J. R. Stat. Soc., Ser. B 46, 440-464 (1984; Zbl 0571.62017)] introduced a class of multinomial goodness-of-fit statistics, \(R^{\lambda}\), based on measures of the divergence between discrete distributions. This class includes both \(X^2\) (when \(\lambda =1\)) and LR (when \(\lambda =0\)). All of the \(R^{\lambda}\) have the same chi-squared limiting null distribution. The power of the commonly used members of the class is usually approximated from a noncentral chi-squared distribution that is also the same for all \(\lambda\). We propose new approximations to the power that vary with the statistic chosen. Both the computations and the results on asymptotic error rates suggest that the new approximations are greatly superior to the traditional power approximation for statistics \(R^{\lambda}\) other than the Pearson \(X^2\).
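The statistics above can be computed directly from the cell counts. The sketch below (an illustration, not the authors' code) uses the standard power-divergence form of the Cressie-Read statistic, \(R^{\lambda}=\frac{2}{\lambda(\lambda+1)}\sum_i N_i[(N_i/np_i)^{\lambda}-1]\), which is not restated in the abstract; the limits \(\lambda\to 0\) and \(\lambda\to -1\) are handled as the usual log forms.

```python
import numpy as np

def cressie_read(N, p, lam):
    """Cressie-Read power-divergence statistic R^lambda.

    N   : observed cell counts N_1, ..., N_k
    p   : null cell probabilities p_1, ..., p_k
    lam : family parameter; lam=1 gives Pearson's X^2,
          lam=0 the likelihood-ratio statistic LR.
    """
    N = np.asarray(N, dtype=float)
    p = np.asarray(p, dtype=float)
    n = N.sum()
    E = n * p                      # expected counts n*p_i under the null
    if lam == 0:                   # limit lambda -> 0: LR statistic
        return 2.0 * np.sum(N * np.log(N / E))
    if lam == -1:                  # limit lambda -> -1: modified LR
        return 2.0 * np.sum(E * np.log(E / N))
    return 2.0 / (lam * (lam + 1.0)) * np.sum(N * ((N / E) ** lam - 1.0))

# Example: k = 4 equiprobable cells, n = 100 observations.
N = [18, 22, 30, 30]
p = [0.25, 0.25, 0.25, 0.25]
x2 = cressie_read(N, p, 1.0)   # equals sum (N_i - n p_i)^2 / (n p_i) = 4.32
lr = cressie_read(N, p, 0.0)   # likelihood-ratio statistic
```

For \(\lambda=1\) the power-divergence form reduces algebraically to Pearson's \(X^2\), since \(\sum_i N_i(N_i/E_i-1)=\sum_i N_i^2/E_i-n=\sum_i(N_i-E_i)^2/E_i\); all members are referred to the same \(\chi^2_{k-1}\) null distribution.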


62G10 Nonparametric hypothesis testing
62E20 Asymptotic distribution theory in statistics


Zbl 0571.62017