## Estimation of the last mean of a monotone sequence
*(English)*
Zbl 0224.62006

Let \(X_i\), \(i = 1, 2\), be independent normal random variables with means \(\theta_i\) and known variances. Without loss of generality, let the variance of \(X_1\) be \(\tau\) and the variance of \(X_2\) be \(1\). Assume \(\theta_2 \ge \theta_1\), and consider the problem of estimating \(\theta_2\) with respect to a squared error loss function. Let \(\delta(X_2)\) be any estimator based on \(X_2\) alone, and consider only those \(\delta(X_2)\) which are admissible for estimating \(\theta_2\) when \(X_1\) is not observed. The following results are obtained.
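As a quick numerical illustration of this setup (a sketch with hypothetical parameter values; the function name `risk_of_x2` is ours, not the paper's), the squared-error risk of the naive estimator \(\delta(X_2) = X_2\) is constant in \((\theta_1, \theta_2)\): it equals the variance of \(X_2\), namely \(1\), so its risk is bounded.

```python
import math
import random

def risk_of_x2(theta1, theta2, tau=2.0, n=200_000, seed=0):
    """Monte Carlo squared-error risk of the estimator delta(X2) = X2
    under the model X1 ~ N(theta1, tau), X2 ~ N(theta2, 1), with the
    monotonicity constraint theta2 >= theta1 (hypothetical values)."""
    assert theta2 >= theta1, "monotone means: theta2 >= theta1"
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        # X1 is drawn but unused, mirroring an estimator based on X2 alone.
        _x1 = rng.gauss(theta1, math.sqrt(tau))
        x2 = rng.gauss(theta2, 1.0)
        total += (x2 - theta2) ** 2
    return total / n
```

For any admissible pair of means the returned value is approximately \(1\), the variance of \(X_2\), illustrating the bounded-risk situation of result (1).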

(1) If the risk of \(\delta(X_2)\) is bounded, then \(\delta(X_2)\) is inadmissible.

This result can be generalized in a few directions. In fact, if the \(\theta_i\) are translation parameters of identical symmetric densities, then for any nonnegative strictly convex loss function \(W(\cdot)\) with a minimum at 0, \(X_2\) is an inadmissible estimator. Suitable generalizations for arbitrary sample sizes are given. Another generalization is that if \(C\) is any positive constant, then \(X_2 \pm C\) is inadmissible as a confidence interval for \(\theta_2\).

(2) Let \(U_\tau\) be the positive solution of the equation \(a^2 + (\tau + 1)a - \tau = 0\); the quantity \(U_\tau\) satisfies \(0\le U_\tau <1\). Then the estimators \(aX_2\), for \(0\le a < U_\tau\), are admissible. It is shown that no \(\delta(X_2)\) which is unbounded below can be generalized Bayes. Thus this result provides an example of an estimator which is not generalized Bayes but which is admissible for the squared error loss function. The results above also hold for estimating the largest of \(k\) ordered means with known order, in the case of equal variances. It is interesting that for some \(a>0\), \(aX_k\) is admissible, regardless of the size of \(k\). The proof of admissibility of \(aX_2\) uses the methods of \textit{C. R. Blyth} [Ann. Math. Stat. 22, 22--42 (1951; Zbl 0042.38303)] and \textit{R. H. Farrell} [ibid. 39, 23--28 (1968; Zbl 0187.15503)].

(3) Consider the analogue of the Pitman estimator, that is, the estimator which is generalized Bayes with respect to the uniform prior on the space \(\theta_2\ge \theta_1\). This estimator is proved to be admissible and minimax.
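By the quadratic formula, the constant in result (2) has the closed form \(U_\tau = \bigl(-(\tau+1) + \sqrt{(\tau+1)^2 + 4\tau}\bigr)/2\). A minimal sketch (the function name `u_tau` is ours, not the paper's) that computes it and checks the stated properties:

```python
import math

def u_tau(tau):
    """Positive root of a^2 + (tau + 1) * a - tau = 0: the threshold
    below which the linear estimators a * X2 are admissible."""
    assert tau >= 0
    # Quadratic formula; the discriminant (tau + 1)^2 + 4 * tau is
    # positive, and the '+' branch gives the nonnegative root.
    return (-(tau + 1) + math.sqrt((tau + 1) ** 2 + 4 * tau)) / 2

for tau in (0.0, 0.5, 1.0, 5.0):
    a = u_tau(tau)
    assert 0 <= a < 1                                  # the stated range
    assert abs(a * a + (tau + 1) * a - tau) < 1e-9     # solves the equation
```

For instance, for equal variances (\(\tau = 1\)) this gives \(U_1 = \sqrt{2} - 1 \approx 0.414\).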

Reviewer: Arthur Cohen

### MSC:

| Code | Classification |
| --- | --- |
| 62C15 | Admissibility in statistical decision theory |