Minimum \(n\)-rank approximation via iterative hard thresholding. (English) Zbl 1338.15053

Summary: The problem of recovering a low \(n\)-rank tensor extends the sparse recovery problem from a low-dimensional space (matrix space) to a high-dimensional space (tensor space), and has many applications in computer vision and graphics, such as image and video inpainting. In this paper, we consider a new tensor recovery model, called minimum \(n\)-rank approximation (MnRA), and propose an appropriate iterative hard thresholding algorithm in which an upper bound on the \(n\)-rank is given in advance. A convergence analysis of the proposed algorithm is also presented. In particular, we show that in the noiseless case the proposed algorithm converges linearly with rate \(\frac{1}{2}\) under suitable conditions. Additionally, by combining the algorithm with an effective heuristic for determining the \(n\)-rank, we can also apply it to solve MnRA when the \(n\)-rank is unknown in advance. Some preliminary numerical results on randomly generated and real low \(n\)-rank tensor completion problems are reported, which show the efficiency of the proposed algorithms.
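The scheme summarized above, a gradient step on the observed entries followed by hard thresholding of the \(n\)-rank, can be sketched in numpy. This is a minimal illustration under assumptions of mine, not the paper's implementation: I use a truncated HOSVD as the \(n\)-rank projection (one standard choice; the paper's exact projection operator may differ), and all names (`unfold`, `project_nrank`, `iht_complete`) are hypothetical.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: bring `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_mult(T, M, mode):
    """Mode-n product: multiply matrix M into dimension `mode` of tensor T."""
    return np.moveaxis(np.tensordot(M, T, axes=(1, mode)), 0, mode)

def project_nrank(T, ranks):
    """Truncated HOSVD: project T onto tensors with n-rank at most `ranks`."""
    # Leading left singular vectors of each mode-n unfolding.
    factors = [np.linalg.svd(unfold(T, n), full_matrices=False)[0][:, :r]
               for n, r in enumerate(ranks)]
    core = T
    for n, U in enumerate(factors):          # compress ...
        core = mode_mult(core, U.T, n)
    X = core
    for n, U in enumerate(factors):          # ... and expand back
        X = mode_mult(X, U, n)
    return X

def iht_complete(T_obs, mask, ranks, iters=50, step=1.0):
    """Iterative hard thresholding for tensor completion: a gradient step on
    the observed entries, then hard thresholding of the n-rank."""
    X = np.zeros_like(T_obs)
    for _ in range(iters):
        X = project_nrank(X + step * mask * (T_obs - X), ranks)
    return X

# Demo: a synthetic tensor with n-rank (2, 2, 2). With all entries observed
# and step = 1, a single IHT step already lands on the exact tensor.
np.random.seed(0)
T = np.random.randn(2, 2, 2)
for n in range(3):
    T = mode_mult(T, np.random.randn(8, 2), n)
X = iht_complete(T, np.ones_like(T), (2, 2, 2), iters=5)
rel_err = np.linalg.norm(X - T) / np.linalg.norm(T)
```

In the noiseless, fully observed setting the iteration recovers an exactly low \(n\)-rank tensor up to floating-point error, consistent with the linear convergence claimed in the summary; with partial observations, more iterations and a suitable step size are needed.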


15A69 Multilinear algebra, tensor calculus
Full Text: DOI arXiv

