
A geometric interpretation of inferences based on ranks in the linear model. (English) Zbl 0537.62051

Let \(Y=X\beta +e=\theta +e\) be a linear regression model, where \(X\) is an \(n\times r\) matrix, \(\beta\) is an unknown \(r\times 1\) vector, and \(e\) is an \(n\times 1\) vector of i.i.d. random errors, symmetrically distributed about 0 with a density. Let \(\Omega\) be the subspace of \(R^n\) spanned by the columns of \(X\), \(\dim \Omega =p\leq r\), so that \(\theta \in \Omega\). An estimate of \(\theta\) is defined as a point \(\hat\theta\) in \(\Omega\) closest to \(y\): \(\| y-\hat\theta\| =\min_{\theta \in \Omega}\| y-\theta \|\), where \(\| \cdot \|\) is any norm on \(R^n\). For the Euclidean norm \(\| \cdot \|_{LS}\), \(\hat\theta\) is the least squares projection of \(y\) onto \(\Omega\).
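A minimal numerical sketch of this geometric picture (not from the paper; the design matrix and data below are synthetic): the least squares fit \(\hat\theta\) is exactly the orthogonal projection of \(y\) onto the column space \(\Omega\) of \(X\).

```python
# Sketch with synthetic data: hat(theta) minimizes the Euclidean norm
# ||y - theta|| over theta in Omega = col(X), i.e. it is the projection of y.
import numpy as np

rng = np.random.default_rng(0)
n, r = 50, 3
X = rng.normal(size=(n, r))                # design matrix (full column rank here)
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + rng.normal(size=n)     # errors symmetric about 0

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
theta_hat = X @ beta_hat                   # least squares point of Omega

# equivalently, theta_hat is the orthogonal projection of y onto Omega
P = X @ np.linalg.solve(X.T @ X, X.T)      # projection matrix onto col(X)
assert np.allclose(theta_hat, P @ y)
```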
Let \(\phi(u)\), \(0<u<1\), be a nonnegative, nondecreasing, square integrable function with \(\int_0^1 \phi^2(u)\,du=1\), and define the scores \(a(i)=\phi(i/(n+1))\). If \(R| y_i|\) denotes the rank of \(| y_i|\) among \(| y_1|,\dots,| y_n|\), then \(\| y\|_{\phi}=\langle a(R| y|),| y|\rangle =\sum_{i=1}^{n}a(R| y_i|)\,| y_i|\) is a norm on \(R^n\). For the norm \(\| \cdot \|_{\phi}\), \(\hat\theta_R\) is an R-prediction of \(y\). Tests of linear hypotheses based on these estimates are then considered.
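The R-fit admits the same kind of sketch. The particular score function \(\phi(u)=\sqrt{3}\,u\) (which satisfies \(\int_0^1\phi^2=1\)) and the derivative-free minimizer below are illustrative choices, not taken from the paper; \(\hat\theta_R\) is obtained by minimizing the rank norm of the residuals over \(\Omega\).

```python
# Sketch (illustration only): the rank-based norm ||v||_phi = sum_i a(R|v_i|)|v_i|
# with scores a(i) = phi(i/(n+1)), phi(u) = sqrt(3) u, and an R-prediction of y
# obtained by numerically minimizing ||y - X beta||_phi.
import numpy as np
from scipy.stats import rankdata
from scipy.optimize import minimize

def phi(u):
    return np.sqrt(3.0) * u            # nonnegative, nondecreasing, int phi^2 = 1

def rank_norm(v):
    n = len(v)
    ranks = rankdata(np.abs(v))        # R|v_i|: rank of |v_i| among |v_1|,...,|v_n|
    a = phi(ranks / (n + 1))           # scores a(i) = phi(i/(n+1))
    return np.sum(a * np.abs(v))       # <a(R|v|), |v|>

rng = np.random.default_rng(0)
n, r = 50, 3
X = rng.normal(size=(n, r))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=n)

res = minimize(lambda b: rank_norm(y - X @ b),
               x0=np.zeros(r), method="Nelder-Mead")
theta_R = X @ res.x                    # R-prediction of y in Omega
```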
Reviewer: N.Leonenko

MSC:

62J05 Linear regression; mixed models
62F03 Parametric hypothesis testing
65C05 Monte Carlo methods