Orthogonal rank-one matrix pursuit for low rank matrix completion. (English) Zbl 1315.65044

Summary: In this paper, we propose an efficient and scalable low-rank matrix completion algorithm. The key idea is to extend the orthogonal matching pursuit method from the vector case to the matrix case. We further propose an economic version of our algorithm that uses a novel weight-updating rule to reduce the time and storage complexity. Both versions are computationally inexpensive in each matrix-pursuit iteration and reach satisfactory results within a few iterations. Another advantage of the proposed algorithm is that it has only one tunable parameter, the rank, which makes it easy to understand and to use; this is especially important in large-scale learning problems. In addition, we rigorously show that both versions achieve a linear convergence rate, which is significantly better than previously known results. We also empirically compare the proposed algorithms with several state-of-the-art matrix completion algorithms on real-world datasets, including the large-scale Netflix recommendation dataset as well as the MovieLens datasets. Numerical results show that our proposed algorithm is more efficient than the competing algorithms while achieving similar or better prediction performance.
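The pursuit scheme described in the summary can be sketched roughly as follows: at each iteration, take the top singular-vector pair of the current residual on the observed entries as a rank-one basis matrix, then refit all weights by least squares over the observed entries. This is a minimal illustration under stated assumptions, not the authors' implementation; the names `or1mp`, `M_obs`, `mask`, and `rank` are hypothetical, and the economic weight-updating variant is omitted.

```python
import numpy as np

def or1mp(M_obs, mask, rank):
    """Sketch of orthogonal rank-one matrix pursuit.

    M_obs : observed matrix (zeros at unobserved entries)
    mask  : boolean array marking observed entries
    rank  : number of rank-one pursuit iterations (the single tunable parameter)
    """
    m, n = M_obs.shape
    bases = []                  # rank-one basis matrices picked so far
    X = np.zeros((m, n))        # current estimate
    for _ in range(rank):
        R = (M_obs - X) * mask  # residual restricted to observed entries
        # Best rank-one atom: top singular-vector pair of the residual.
        U, s, Vt = np.linalg.svd(R, full_matrices=False)
        bases.append(np.outer(U[:, 0], Vt[0]))
        # Orthogonal step: least-squares weights over the observed entries.
        A = np.stack([B[mask] for B in bases], axis=1)   # (#observed, k)
        theta, *_ = np.linalg.lstsq(A, M_obs[mask], rcond=None)
        X = sum(t * B for t, B in zip(theta, bases))
    return X
```

Refitting all weights at every iteration (rather than keeping earlier weights fixed) is what distinguishes the orthogonal pursuit from plain greedy pursuit; the economic version mentioned in the summary replaces this full refit with a cheaper two-term update.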


65F30 Other matrix algorithms (MSC2010)
15A83 Matrix completion problems
65F10 Iterative numerical methods for linear systems
65F20 Numerical solutions to overdetermined systems, pseudoinverses
Full Text: DOI arXiv


[1] A. Argyriou, T. Evgeniou, and M. Pontil, {\it Convex multi-task feature learning}, Mach. Learn., 73 (2008), pp. 243-272. · Zbl 1470.68073
[2] F. Bach, {\it Consistency of trace norm minimization}, J. Mach. Learn. Res., 9 (2008), pp. 1019-1048. · Zbl 1225.68146
[3] L. Balzano, R. Nowak, and B. Recht, {\it Online identification and tracking of subspaces from highly incomplete information}, in Proceedings of the Allerton Conference on Communication, Control and Computing, 2010.
[4] R. Bell and Y. Koren, {\it Lessons from the Netflix prize challenge}, ACM SIGKDD Explorations, 9 (2007), pp. 75-79.
[5] J. Bennett and S. Lanning, {\it The Netflix prize}, in Proceedings of KDD Cup and Workshop, 2007.
[6] J.-F. Cai, E. J. Candès, and Z. Shen, {\it A singular value thresholding algorithm for matrix completion}, SIAM J. Optim., 20 (2010), pp. 1956-1982. · Zbl 1201.90155
[7] E. J. Candès and B. Recht, {\it Exact matrix completion via convex optimization}, Found. Comput. Math., 9 (2009), pp. 717-772. · Zbl 1219.90124
[8] R. A. DeVore and V. N. Temlyakov, {\it Some remarks on greedy algorithms}, Adv. Comput. Math., 5 (1996), pp. 173-187. · Zbl 0857.65016
[9] M. Dudík, Z. Harchaoui, and J. Malick, {\it Lifted coordinate descent for learning with trace-norm regularization}, in Proceedings of the 15th International Conference on Artificial Intelligence and Statistics (AISTATS), 2012.
[10] M. Frank and P. Wolfe, {\it An algorithm for quadratic programming}, Naval Res. Logist. Quart., 3 (1956), pp. 95-110.
[11] J. H. Friedman, T. Hastie, and R. Tibshirani, {\it Regularization paths for generalized linear models via coordinate descent}, J. Statist. Software, 33 (2010), pp. 1-22.
[12] K. Goldberg, T. Roeder, D. Gupta, and C. Perkins, {\it Eigentaste: A constant time collaborative filtering algorithm}, Inform. Retrieval, 4 (2001), pp. 133-151. · Zbl 0989.68052
[13] G. H. Golub and C. F. Van Loan, {\it Matrix Computations}, 3rd ed., Johns Hopkins University Press, Baltimore, MD, 1996. · Zbl 0865.65009
[14] T. Hastie, R. Tibshirani, and J. H. Friedman, {\it The elements of statistical learning: Data mining, inference, and prediction}, Springer-Verlag, New York, 2009. · Zbl 1273.62005
[15] E. Hazan, {\it Sparse approximate solutions to semidefinite programs}, in Proceedings of the 8th Latin American Conference on Theoretical Informatics, 2008. · Zbl 1136.90430
[16] Q. Huynh-Thu and M. Ghanbari, {\it Scope of validity of PSNR in image/video quality assessment}, Electron. Lett., 44 (2008), pp. 800-801.
[17] M. Jaggi and M. Sulovský, {\it A simple algorithm for nuclear norm regularized problems}, in Proceedings of the 27th International Conference on Machine Learning (ICML), 2010, pp. 471-478.
[18] P. Jain, R. Meka, and I. S. Dhillon, {\it Guaranteed rank minimization via singular value projection}, Adv. Neural Inf. Process. Syst., 22 (2010), pp. 937-945.
[19] P. Jain, P. Netrapalli, and S. Sanghavi, {\it Low-rank matrix completion using alternating minimization}, in Proceedings of the 45th Annual ACM Symposium on Theory of Computing (STOC), 2013, pp. 665-674. · Zbl 1293.65073
[20] S. Ji and J. Ye, {\it An accelerated gradient method for trace norm minimization}, in Proceedings of the 26th International Conference on Machine Learning (ICML), 2009, pp. 457-464.
[21] R. Keshavan and S. Oh, {\it OptSpace: A Gradient Descent Algorithm on the Grassmann Manifold for Matrix Completion}, http://arxiv.org/abs/0910.5260 (2009).
[22] Y. Koren, {\it Factorization meets the neighborhood: A multifaceted collaborative filtering model}, in Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), 2008.
[23] Y. Koren, R. Bell, and C. Volinsky, {\it Matrix factorization techniques for recommender systems}, Computer, 42 (2009), pp. 30-37.
[24] M.-J. Lai, Y. Xu, and W. Yin, {\it Improved iteratively reweighted least squares for unconstrained smoothed \(ℓ_q\) minimization}, SIAM J. Numer. Anal., 51 (2013), pp. 927-957. · Zbl 1268.49038
[25] K. Lee and Y. Bresler, {\it ADMiRA: Atomic decomposition for minimum rank approximation}, IEEE Trans. Inform. Theory, 56 (2010), pp. 4402-4416. · Zbl 1366.94112
[26] E. Liu and V. N. Temlyakov, {\it The orthogonal super greedy algorithm and applications in compressed sensing}, IEEE Trans. Inform. Theory, 58 (2012), pp. 2040-2047. · Zbl 1365.94180
[27] Y.-J. Liu, D. Sun, and K.-C. Toh, {\it An implementable proximal point algorithmic framework for nuclear norm minimization}, Math. Program., 133 (2012), pp. 399-436. · Zbl 1262.90125
[28] Z. Lu and Y. Zhang, {\it Penalty Decomposition Methods for Rank Minimization}, http://arxiv.org/abs/1008.5373 (2010).
[29] S. Ma, D. Goldfarb, and L. Chen, {\it Fixed point and Bregman iterative methods for matrix rank minimization}, Math. Program., 128 (2011), pp. 321-353. · Zbl 1221.65146
[30] R. Mazumder, T. Hastie, and R. Tibshirani, {\it Spectral regularization algorithms for learning large incomplete matrices}, J. Mach. Learn. Res., 99 (2010), pp. 2287-2322. · Zbl 1242.68237
[31] B. N. Miller, I. Albert, S. K. Lam, J. A. Konstan, and J. Riedl, {\it MovieLens unplugged: Experiences with an occasionally connected recommender system}, in Proceedings of the 8th International Conference on Intelligent User Interfaces, 2003, pp. 263-266.
[32] B. Mishra, G. Meyer, F. Bach, and R. Sepulchre, {\it Low-rank optimization with trace norm penalty}, SIAM J. Optim., 23 (2013), pp. 2124-2149. · Zbl 1286.65078
[33] D. Needell and J. A. Tropp, {\it CoSaMP: Iterative signal recovery from incomplete and inaccurate samples}, Comm. ACM, 53 (2010), pp. 93-100. · Zbl 1163.94003
[34] S. Negahban and M. Wainwright, {\it Estimation of (near) low-rank matrices with noise and high-dimensional scaling}, in Proceedings of the 27th International Conference on Machine Learning (ICML), 2010. · Zbl 1216.62090
[35] Y. C. Pati, R. Rezaiifar, and P. S. Krishnaprasad, {\it Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition}, in Proceedings of the 27th Annual Asilomar Conference on Signals, Systems, and Computers, 1993, pp. 40-44.
[36] B. Recht, M. Fazel, and P. A. Parrilo, {\it Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization}, SIAM Rev., 52 (2010), pp. 471-501. · Zbl 1198.90321
[37] B. Recht and C. Ré, {\it Parallel stochastic gradient algorithms for large-scale matrix completion}, Math. Program. Comput., 5 (2013), pp. 201-226. · Zbl 1275.90039
[38] S. Shalev-Shwartz, A. Gonen, and O. Shamir, {\it Large-scale convex minimization with a low-rank constraint}, in Proceedings of the 28th International Conference on Machine Learning (ICML), 2011, pp. 329-336.
[39] S. Shalev-Shwartz and A. Tewari, {\it Stochastic methods for \(ℓ_1\)-regularized loss minimization}, in Proceedings of the 26th International Conference on Machine Learning (ICML), 2009, pp. 929-936.
[40] N. Srebro, J. Rennie, and T. Jaakkola, {\it Maximum-margin matrix factorizations}, Adv. Neural Inf. Process. Syst., 17 (2004), pp. 1329-1336.
[41] V. N. Temlyakov, {\it Greedy approximation}, Acta Numer., 17 (2008), pp. 235-409. · Zbl 1178.65050
[42] A. Tewari, P. Ravikumar, and I. S. Dhillon, {\it Greedy algorithms for structurally constrained high dimensional problems}, Adv. Neural Inf. Process. Syst., 24 (2011), pp. 882-890.
[43] R. Tibshirani, {\it Regression shrinkage and selection via the lasso}, J. R. Stat. Soc. Ser. B, 58 (1994), pp. 267-288. · Zbl 0850.62538
[44] K.-C. Toh and S. Yun, {\it An accelerated proximal gradient algorithm for nuclear norm regularized least squares problems}, Pacific J. Optim., 6 (2010), pp. 615-640. · Zbl 1205.90218
[45] J. A. Tropp, {\it Greed is good: Algorithmic results for sparse approximation}, IEEE Trans. Inform. Theory, 50 (2004), pp. 2231-2242. · Zbl 1288.94019
[46] Z. Wen, W. Yin, and Y. Zhang, {\it Solving a low-rank factorization model for matrix completion by a nonlinear successive over-relaxation algorithm}, Math. Program. Comput., 4 (2012), pp. 333-361. · Zbl 1271.65083
[47] T. T. Wu and K. Lange, {\it Coordinate descent algorithms for lasso penalized regression}, Ann. Appl. Stat., 2 (2008), pp. 224-244. · Zbl 1137.62045
[48] S. Yun and K.-C. Toh, {\it A coordinate gradient descent method for \(ℓ_1\)-regularized convex minimization}, Comput. Optim. Appl., 48 (2011), pp. 273-307. · Zbl 1220.90092
[49] X. Zhang, Y. Yu, and D. Schuurmans, {\it Accelerated training for matrix-norm regularization: A boosting approach}, Adv. Neural Inf. Process. Syst., 25 (2012), pp. 2906-2914.