Matrix recipes for hard thresholding methods. (English) Zbl 1311.90141

This paper introduces a new set of low-rank recovery algorithms for linear inverse problems within the class of hard thresholding methods. The authors present new strategies, and review existing ones, for recovering low-rank matrices from dimensionality-reducing linear projections. The discussion revolves around four basic building blocks that exploit the problem structure to reduce computational complexity without sacrificing stability. A complexity analysis of the proposed algorithms is provided, and two acceleration schemes are considered. The authors improve convergence speed by exploiting randomized low-rank projections, and they provide empirical support for their claims through experiments on synthetic and real data.
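
To make the setting concrete, the following is a minimal sketch of the basic hard-thresholding iteration for low-rank recovery: a gradient step on the least-squares data fit followed by projection onto rank-r matrices via a truncated SVD. It is not the authors' implementation; the function names (`hard_thresholding_recovery`, `rank_r_projection`), the Gaussian sensing operator, the unit step size, and the problem dimensions are illustrative assumptions, and the paper's building blocks and acceleration schemes refine this basic template.

```python
import numpy as np


def rank_r_projection(X, r):
    """Best rank-r approximation of X via a truncated SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]


def hard_thresholding_recovery(A, b, shape, r, mu=1.0, iters=300):
    """Iterate X <- P_r(X - mu * mat(A^T (A vec(X) - b))).

    A is a (p, m*n) sensing matrix acting on vec(X) and b the (p,)
    measurement vector. The unit step size assumes A is nearly an
    isometry on low-rank matrices (restricted isometry); reduce mu
    if the residual grows.
    """
    m, n = shape
    X = np.zeros((m, n))
    for _ in range(iters):
        residual = A @ X.ravel() - b
        grad = (A.T @ residual).reshape(m, n)
        X = rank_r_projection(X - mu * grad, r)
    return X


# Illustrative synthetic experiment (dimensions chosen arbitrarily).
rng = np.random.default_rng(0)
m, n, r, p = 30, 30, 2, 700
X_true = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
A = rng.standard_normal((p, m * n)) / np.sqrt(p)  # Gaussian sensing operator
b = A @ X_true.ravel()
X_hat = hard_thresholding_recovery(A, b, (m, n), r)
print(np.linalg.norm(X_hat - X_true) / np.linalg.norm(X_true))  # small if recovery succeeds
```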

MSC:

90C30 Nonlinear programming
90C49 Extreme-point and pivoting methods
