Golub, G. H.; Hoffman, Alan; Stewart, G. W. A generalization of the Eckart-Young-Mirsky matrix approximation theorem. (English) Zbl 0623.15020 Linear Algebra Appl. 88-89, 317-327 (1987).

Let \(X\) be an \(n\times p\) matrix with \(n\geq p\) and let \(\|\cdot\|\) be a unitarily invariant matrix norm. Let \(X=(X_1,X_2)\), where \(X_1\) has \(k\) columns. The problem considered in this paper is: find a matrix \(\hat X_2\) such that \(\operatorname{rank}(X_1,\hat X_2)\leq r\) and
\[ \|(X_1,\hat X_2)-(X_1,X_2)\| = \inf_{\operatorname{rank}(X_1,\bar X_2)\leq r} \|(X_1,\bar X_2)-(X_1,X_2)\|. \]
This problem was solved by C. Eckart and G. Young [The approximation of one matrix by another of lower rank, Psychometrika 1, 211-218 (1936)] in the case \(k=0\), for the Frobenius norm. Let \(H_r(X)\) denote the Eckart-Young solution (with \(H_r(X)=X\) if \(r>p\)). The authors prove the following:

Theorem. Let \(X=(X_1,X_2)\), where \(X_1\) has \(k\) columns, and let \(\ell=\operatorname{rank} X_1\). Let \(P\) denote the orthogonal projection onto the column space of \(X_1\) and \(P^{\perp}\) the orthogonal projection onto its orthogonal complement. If \(\ell\leq r\), then the matrix \(\hat X_2 = PX_2 + H_{r-\ell}(P^{\perp}X_2)\) is a solution of the problem above.

A number of consequences of this theorem are considered and, in particular, applications to multiple correlations, variance inflation factors and total least squares are given.

Reviewer: F. J. Gaines

MSC:
15A60 Norms of matrices, numerical range, applications of functional analysis to matrix theory
15A24 Matrix equations and identities
62H20 Measures of association (correlation, canonical correlation, etc.)

Keywords: matrix approximation; unitarily invariant matrix norm; rank; singular values; Frobenius norm; multiple correlations; variance inflation factors; total least squares
References:
[1] Eckart, C.; Young, G., The approximation of one matrix by another of lower rank, Psychometrika, 1, 211-218 (1936) · JFM 62.1075.02
[2] Gauss, C. F., Theoria combinationis observationum erroribus minimis obnoxiae, Werke IV (1821), Königlichen Gesellschaft der Wissenschaften zu Göttingen, 1-26
[3] Golub, G. H.; Van Loan, C., An analysis of the total least squares problem, SIAM J. Numer. Anal., 17, 883-893 (1980) · Zbl 0468.65011
[4] Golub, G. H.; Van Loan, C., Matrix Computations (1983), Johns Hopkins Univ. Press, Baltimore · Zbl 0559.65011
[5] Mirsky, L., Symmetric gauge functions and unitarily invariant norms, Quart. J. Math. Oxford, 11, 50-59 (1960) · Zbl 0105.01101
[6] Ouellette, D. V., Schur complements and statistics, Linear Algebra Appl., 36, 187-295 (1981) · Zbl 0455.15012
[7] Stewart, G. W., A nonlinear version of Gauss's minimum variance theorem with applications to an errors-in-the-variables model, Computer Science Technical Report TR-1263 (1983), Univ. of Maryland
[9] Webster, J.; Gunst, R.; Mason, R., Latent root regression analysis, Technometrics, 16, 513-522 (1974) · Zbl 0294.62081
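As an illustration (not part of the review), the construction in the theorem is easy to carry out numerically for the Frobenius norm: build \(P\) from an orthonormal basis of the column space of \(X_1\), apply the truncated-SVD map \(H_{r-\ell}\) to \(P^{\perp}X_2\), and check that \(\hat X_2 = PX_2 + H_{r-\ell}(P^{\perp}X_2)\) satisfies the rank constraint and beats randomly generated feasible competitors. The sketch below is my own; the function name `H` and the competitor parametrization \(\bar X_2 = X_1C + UV\) (which always gives \(\operatorname{rank}(X_1,\bar X_2)\leq \ell + (r-\ell) = r\)) are assumptions of the illustration, not notation from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k, r = 8, 5, 2, 3
X1 = rng.standard_normal((n, k))         # fixed block (k columns)
X2 = rng.standard_normal((n, p - k))     # block to be approximated

def H(A, rank):
    """Eckart-Young solution: best rank-<=rank approximation in Frobenius norm."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s[rank:] = 0.0
    return (U * s) @ Vt

# P: orthogonal projection onto the column space of X1
Q, _ = np.linalg.qr(X1)
l = np.linalg.matrix_rank(X1)            # ell = rank(X1); here l == k generically
P = Q[:, :l] @ Q[:, :l].T

# Golub-Hoffman-Stewart construction: X2hat = P X2 + H_{r-l}(P^perp X2)
X2hat = P @ X2 + H((np.eye(n) - P) @ X2, r - l)

# Since X1 is unchanged, ||(X1, X2hat) - (X1, X2)||_F = ||X2hat - X2||_F.
err = np.linalg.norm(X2hat - X2)
assert np.linalg.matrix_rank(np.hstack([X1, X2hat])) <= r

# Sanity check of optimality: no random feasible competitor does better.
for _ in range(200):
    C = rng.standard_normal((k, p - k))
    U2 = rng.standard_normal((n, r - l))
    V2 = rng.standard_normal((r - l, p - k))
    X2bar = X1 @ C + U2 @ V2             # rank(X1, X2bar) <= r by construction
    assert np.linalg.norm(X2bar - X2) >= err - 1e-9
```

The random search is only a sanity check, of course; the theorem guarantees optimality over all feasible \(\bar X_2\), not merely the sampled ones.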