
Matrix completion under interval uncertainty. (English) Zbl 1394.90462

Summary: Matrix completion under interval uncertainty can be cast as a matrix completion problem with element-wise box constraints. We present an efficient alternating-direction parallel coordinate-descent method for this problem. We show that the method outperforms all other known methods on an image in-painting benchmark in terms of signal-to-noise ratio, and that it provides high-quality solutions for an instance of collaborative filtering with 100,198,805 recommendations within 5 minutes on a single personal computer.
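The summary does not restate the optimization model, so the following is only a minimal sketch of one plausible formulation: a rank-r factorization X = L R^T, a least-squares fit on the observed entries, a Frobenius-norm regularizer, and a squared-hinge penalty for violating the element-wise interval bounds. The function name box_completion and all parameters (rank, mu, rho, step, n_outer) are illustrative assumptions, and the sketch uses plain alternating gradient steps rather than the paper's alternating-direction parallel coordinate-descent updates.

```python
import numpy as np

def box_completion(M_obs, lower, upper, rank=10, mu=0.1, rho=1.0,
                   n_outer=50, step=1e-3):
    """Sketch of box-constrained matrix completion via factorization X = L R^T.

    M_obs : (m, n) array with np.nan at unobserved entries.
    lower, upper : element-wise interval bounds (scalars or arrays; may be +/- inf).
    The squared-hinge penalty rho/2 * max(0, violation)^2 and the fixed-step
    alternating gradient updates are illustrative choices, not the paper's method.
    """
    m, n = M_obs.shape
    mask = ~np.isnan(M_obs)
    rng = np.random.default_rng(0)
    L = rng.standard_normal((m, rank)) * 0.01
    R = rng.standard_normal((n, rank)) * 0.01

    def grad_wrt_X(X):
        # residual on observed entries plus hinge-penalty gradients for box violations
        G = np.where(mask, X - np.nan_to_num(M_obs), 0.0)
        G += rho * np.minimum(X - lower, 0.0)   # active where X falls below the lower bound
        G += rho * np.maximum(X - upper, 0.0)   # active where X exceeds the upper bound
        return G

    for _ in range(n_outer):
        X = L @ R.T
        L -= step * (grad_wrt_X(X) @ R + mu * L)    # update L with R fixed
        X = L @ R.T
        R -= step * (grad_wrt_X(X).T @ L + mu * R)  # update R with L fixed
    return L, R
```

A hypothetical call such as L, R = box_completion(M, lower=1.0, upper=5.0, rank=20) would return factors whose product L @ R.T fits the observed ratings while being pushed toward the prescribed interval [1, 5] on all entries.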

MSC:

90C25 Convex programming
65K05 Numerical mathematical programming methods
65F30 Other matrix algorithms (MSC2010)
15A83 Matrix completion problems
