Matrix completion and low-rank SVD via fast alternating least squares. (English) Zbl 1352.65117

Summary: The matrix-completion problem has attracted a lot of attention, largely as a result of the celebrated Netflix competition. Two popular approaches for solving the problem are nuclear-norm-regularized matrix approximation [E. Candès and T. Tao, “The power of convex relaxation: near-optimal matrix completion”, IEEE Trans. Inf. Theory 56, No. 5, 2053–2080 (2010; doi:10.1109/TIT.2010.2044061); the first author et al., J. Mach. Learn. Res. 11, 2287–2322 (2010; Zbl 1242.68237)] and maximum-margin matrix factorization [N. Srebro, J. D. M. Rennie and T. S. Jaakkola, “Maximum-margin matrix factorization”, in: Advances in neural information processing systems 17. 1329–1336 (2004), http://papers.nips.cc/paper/2655-maximum-margin-matrix-factorization.pdf]. In some cases these two procedures solve equivalent problems, but with quite different algorithms. In this article, we bring the two approaches together, leading to an efficient algorithm for large matrix factorization and completion that outperforms both. We develop an R software package, softImpute, implementing our approaches, and a distributed version for very large matrices using the Spark cluster programming environment.
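
In notation not fixed by the summary (with \(X\) the partially observed matrix, \(\Omega\) the set of observed entries, \(P_\Omega\) the projection onto those entries, and \(\lambda\ge 0\) a regularization parameter), the two criteria being connected are, roughly,
\[
\min_{M}\ \tfrac12\|P_\Omega(X-M)\|_F^2+\lambda\|M\|_* \qquad\text{and}\qquad \min_{A,B}\ \tfrac12\|P_\Omega(X-AB^T)\|_F^2+\tfrac{\lambda}{2}\bigl(\|A\|_F^2+\|B\|_F^2\bigr),
\]
where \(A\) and \(B\) have \(r\) columns. Since \(\|M\|_*=\min_{AB^T=M}\tfrac12\bigl(\|A\|_F^2+\|B\|_F^2\bigr)\), the two problems agree whenever \(r\) is at least the rank of the nuclear-norm solution, which underlies the equivalence mentioned above.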

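A minimal usage sketch of the softImpute R package mentioned in the summary, assuming its CRAN interface (softImpute() with rank.max, lambda and type arguments, and complete() for filling in missing values); the data and parameter values are purely illustrative:

    library(softImpute)                      # assumed: CRAN interface of the package
    set.seed(1)
    X <- matrix(rnorm(20 * 5), 20, 5) %*% t(matrix(rnorm(15 * 5), 15, 5))  # 20 x 15 matrix of rank at most 5
    Xmiss <- X
    Xmiss[sample(length(X), 100)] <- NA      # hide about a third of the entries
    fit <- softImpute(Xmiss, rank.max = 5, lambda = 1, type = "als")  # fast alternating-least-squares variant
    Xhat <- complete(Xmiss, fit)             # fill in the missing entries from the low-rank fit
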
MSC:

65F15 Numerical computation of eigenvalues and eigenvectors of matrices
15A83 Matrix completion problems
62H12 Estimation in multivariate analysis

Citations:

Zbl 1242.68237