Matrix rigidity and the ill-posedness of robust PCA and matrix completion. (English) Zbl 1502.62074

Summary: Robust principal component analysis (RPCA) [E. J. Candès et al., J. ACM 58, No. 3, Article No. 11, 37 p. (2011; Zbl 1327.62369)] and low-rank matrix completion [B. Recht et al., SIAM Rev. 52, No. 3, 471–501 (2010; Zbl 1198.90321)] are extensions of PCA that allow for outliers and missing entries, respectively. It is well known that solving these problems requires low coherence between the low-rank matrix and the canonical basis, since in the extreme case, when the low-rank matrix we wish to recover is itself sparse, there is an inherent ambiguity. However, in both problems the well-posedness issue is even more fundamental: in some cases, both RPCA and matrix completion can fail to have any solution because the set of low-rank plus sparse matrices is not closed, which in turn is equivalent to the matrix rigidity function not being lower semicontinuous [A. Kumar et al., Comput. Complexity 23, No. 4, 531–563 (2014; Zbl 1366.68076)]. By constructing infinite families of matrices, we derive bounds on the rank and sparsity such that the set of low-rank plus sparse matrices is not closed. We also demonstrate numerically that a wide range of nonconvex algorithms for both RPCA and matrix completion have diverging components when applied to our constructed matrices. This is analogous to sets of higher-order tensors of bounded canonical polyadic (CP) rank not being closed, which renders the best low-rank tensor approximation problem unsolvable [V. de Silva and L.-H. Lim, SIAM J. Matrix Anal. Appl. 30, No. 3, 1084–1127 (2008; Zbl 1167.14038)] and hence encourages the use of multilinear tensor rank [L. De Lathauwer et al., SIAM J. Matrix Anal. Appl. 21, No. 4, 1324–1342 (2000; Zbl 0958.15026)].
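The tensor analogy invoked above can be made concrete. A standard construction (in the spirit of de Silva and Lim [14], not taken from the paper under review) shows that the set of tensors of CP rank at most 2 is not closed: for linearly independent vectors \(a, b\),

```latex
X_n \;=\; n\left(a + \tfrac{1}{n}b\right)^{\otimes 3} \;-\; n\,a^{\otimes 3}
\;\xrightarrow[n\to\infty]{}\;
a \otimes a \otimes b \;+\; a \otimes b \otimes a \;+\; b \otimes a \otimes a .
```

Each \(X_n\) has CP rank at most 2, yet the limit has CP rank 3, so a best rank-2 approximation of the limit does not exist. The paper's matrix constructions play the analogous role for sets of low-rank plus sparse matrices.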


62H25 Factor analysis and principal components; correspondence analysis
62F35 Robustness and adaptive procedures (parametric inference)
65F22 Ill-posedness and regularization problems in numerical linear algebra
65F50 Computational methods for sparse matrices


Full Text: DOI arXiv


[1] H. Abdi and L. J. Williams, Principal component analysis, Wiley Interdiscip. Rev. Comput. Stat., 2 (2010), pp. 433-459.
[2] S. H. Baete, J. Chen, Y.-C. Lin, X. Wang, R. Otazo, and F. E. Boada, Low rank plus sparse decomposition of ODFs for improved detection of group-level differences and variable correlations in white matter, NeuroImage, 174 (2018), pp. 138-152.
[3] J. D. Blanchard, J. Tanner, and K. Wei, CGIHT: Conjugate gradient iterative hard thresholding for compressed sensing and matrix completion, Inf. Inference, 4 (2015), pp. 289-327. · Zbl 1380.94045
[4] N. Boumal, V. Voroninski, and A. S. Bandeira, The non-convex Burer-Monteiro approach works on smooth semidefinite programs, in Proceedings of the 30th International Conference on Neural Information Processing Systems, Barcelona, Spain, 2016, pp. 2765-2773.
[5] T. Bouwmans, A. Sobral, S. Javed, S. K. Jung, and E.-H. Zahzah, Decomposition into low-rank plus additive matrices for background/foreground separation: A review for a comparative evaluation with a large-scale dataset, Comput. Sci. Rev., 23 (2017), pp. 1-71. · Zbl 1398.68572
[6] S. Burer and R. D. Monteiro, A nonlinear programming algorithm for solving semidefinite programs via low-rank factorization, Math. Program., 95 (2003), pp. 329-357. · Zbl 1030.90077
[7] J.-F. Cai, E. J. Candès, and Z. Shen, A singular value thresholding algorithm for matrix completion, SIAM J. Optim., 20 (2010), pp. 1956-1982, https://doi.org/10.1137/080738970. · Zbl 1201.90155
[8] E. J. Candès, X. Li, Y. Ma, and J. Wright, Robust principal component analysis?, J. ACM, 58 (2011), pp. 1-37. · Zbl 1327.62369
[9] E. J. Candès and B. Recht, Exact matrix completion via convex optimization, Found. Comput. Math., 9 (2009), pp. 717-772. · Zbl 1219.90124
[10] E. J. Candès and T. Tao, The power of convex relaxation: Near-optimal matrix completion, IEEE Trans. Inform. Theory, 56 (2010), pp. 2053-2080. · Zbl 1366.15021
[11] V. Chandrasekaran, S. Sanghavi, P. A. Parrilo, and A. S. Willsky, Rank-sparsity incoherence for matrix decomposition, SIAM J. Optim., 21 (2011), pp. 572-596, https://doi.org/10.1137/090761793. · Zbl 1226.90067
[12] Y. Chen, Y. Guo, Y. Wang, D. Wang, C. Peng, and G. He, Denoising of hyperspectral images using nonconvex low rank matrix approximation, IEEE Trans. Geosci. Remote Sensing, 55 (2017), pp. 5366-5380.
[13] L. De Lathauwer, B. De Moor, and J. Vandewalle, On the best rank-1 and rank-\((R_1, R_2,\ldots, R_N)\) approximation of higher-order tensors, SIAM J. Matrix Anal. Appl., 21 (2000), pp. 1324-1342, https://doi.org/10.1137/S0895479898346995. · Zbl 0958.15026
[14] V. de Silva and L.-H. Lim, Tensor rank and the ill-posedness of the best low-rank approximation problem, SIAM J. Matrix Anal. Appl., 30 (2008), pp. 1084-1127, https://doi.org/10.1137/06066518X. · Zbl 1167.14038
[15] J. W. Demmel, Applied Numerical Linear Algebra, SIAM, 1997, https://doi.org/10.1137/1.9781611971446. · Zbl 0879.65017
[16] P. Drineas, R. Kannan, and M. W. Mahoney, Fast Monte Carlo algorithms for matrices II: Computing a low-rank approximation to a matrix, SIAM J. Comput., 36 (2006), pp. 158-183, https://doi.org/10.1137/S0097539704442696. · Zbl 1111.68148
[17] A. Dutta, F. Hanzely, and P. Richtárik, A Nonconvex Projection Method for Robust PCA, preprint, https://arxiv.org/abs/1805.07962, 2018.
[18] H. Gao, J.-F. Cai, Z. Shen, and H. Zhao, Robust principal component analysis-based four-dimensional computed tomography, Phys. Med. Biol., 56 (2011), pp. 3181-3198.
[19] R. Ge, C. Jin, and Y. Zheng, No spurious local minima in nonconvex low rank problems: A unified geometric analysis, in Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, Proc. Mach. Learn. Res. 70, 2017, pp. 1233-1242.
[20] A. Gogna, A. Shukla, H. K. Agarwal, and A. Majumdar, Split Bregman algorithms for sparse/joint-sparse and low-rank signal recovery: Application in compressive hyperspectral imaging, in Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), IEEE, 2014, pp. 1302-1306.
[21] C. Goodall and I. T. Jolliffe, Principal Component Analysis, Springer, 2002. · Zbl 1011.62064
[22] Q. Gu and Z. Wang, Low-rank and sparse structure pursuit via alternating minimization, in Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, Cadiz, Spain, Proc. Mach. Learn. Res. 51, 2016, pp. 600-609.
[23] S. Gu, L. Zhang, W. Zuo, and X. Feng, Weighted nuclear norm minimization with application to image denoising, in Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, IEEE, 2014, pp. 2862-2869.
[24] J. Haldar and D. Hernando, Rank-constrained solutions to linear matrix equations using PowerFactorization, IEEE Signal Process. Lett., 16 (2009), pp. 584-587.
[25] N. Halko, P. G. Martinsson, and J. A. Tropp, Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions, SIAM Rev., 53 (2011), pp. 217-288, https://doi.org/10.1137/090771806. · Zbl 1269.65043
[26] E. Han, P. Carbonetto, R. E. Curtis, Y. Wang, J. M. Granka, J. Byrnes, K. Noto, A. R. Kermany, N. M. Myres, M. J. Barber, K. A. Rand, S. Song, T. Roman, E. Battat, E. Elyashiv, H. Guturu, E. L. Hong, K. G. Chahine, and C. A. Ball, Clustering of 770,000 genomes reveals post-colonial population structure of North America, Nature Commun., 8 (2017), 14238.
[27] F. L. Hitchcock, The expression of a tensor or a polyadic as a sum of products, J. Math. Phys., 6 (1927), pp. 164-189. · JFM 53.0095.01
[28] F. L. Hitchcock, Multiple invariants and generalized rank of a p-way matrix or tensor, J. Math. Phys., 7 (1928), pp. 39-79. · JFM 53.0097.01
[29] L.-C. Hsu, C.-Y. Huang, Y.-H. Chuang, H.-W. Chen, Y.-T. Chan, H. Y. Teah, T.-Y. Chen, C.-F. Chang, Y.-T. Liu, and Y.-M. Tzou, Accumulation of heavy metals and trace elements in fluvial sediments received effluents from traditional and semiconductor industries, Sci. Rep., 6 (2016), 34250.
[30] A. Kumar, S. V. Lokam, V. M. Patankar, and M. N. J. Sarma, Using elimination theory to construct rigid matrices, Comput. Complex., 23 (2014), pp. 531-563. · Zbl 1366.68076
[31] Z. Lin, M. Chen, and Y. Ma, The Augmented Lagrange Multiplier Method for Exact Recovery of Corrupted Low-Rank Matrices, preprint, https://arxiv.org/abs/1009.5055, 2010.
[32] A. Kyrillidis and V. Cevher, Matrix recipes for hard thresholding methods, J. Math. Imaging Vision, 48 (2014), pp. 235-265. · Zbl 1311.90141
[33] K. Lee and Y. Bresler, ADMiRA: Atomic decomposition for minimum rank approximation, IEEE Trans. Inform. Theory, 56 (2010), pp. 4402-4416. · Zbl 1366.94112
[34] G. Liu, Z. Lin, S. Yan, J. Sun, Y. Yu, and Y. Ma, Robust recovery of subspace structures by low-rank representation, IEEE Trans. Pattern Anal. Mach. Intell., 35 (2013), pp. 171-184.
[35] X. Luan, B. Fang, L. Liu, W. Yang, and J. Qian, Extracting sparse error of robust PCA for face recognition in the presence of varying illumination and occlusion, Pattern Recognition, 47 (2014), pp. 495-508.
[36] J. Miehlbradt, A. Cherpillod, S. Mintchev, M. Coscia, F. Artoni, D. Floreano, and S. Micera, Data-driven body-machine interface for the accurate control of drones, Proc. Natl. Acad. Sci. USA, 115 (2018), pp. 7913-7918.
[37] P. Netrapalli, U. N. Niranjan, S. Sanghavi, A. Anandkumar, and P. Jain, Non-convex robust PCA, in Advances in Neural Information Processing Systems 27, Curran Associates, Inc., 2014, pp. 1107-1115.
[38] O. Oreifej, X. Li, and M. Shah, Simultaneous video stabilization and moving object detection in turbulence, IEEE Trans. Pattern Anal. Mach. Intell., 35 (2013), pp. 450-462.
[39] R. Otazo, E. Candès, and D. K. Sodickson, Low-rank plus sparse matrix decomposition for accelerated dynamic MRI with separation of background and dynamic components, Magnetic Resonance Med., 73 (2015), pp. 1125-1136.
[40] C. Plesa, A. M. Sidore, N. B. Lubock, D. Zhang, and S. Kosuri, Multiplexed gene synthesis in emulsions for exploring protein functional landscapes, Science, 359 (2018), pp. 343-347.
[41] B. Recht, M. Fazel, and P. A. Parrilo, Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization, SIAM Rev., 52 (2010), pp. 471-501, https://doi.org/10.1137/070697835. · Zbl 1198.90321
[42] M. Ringnér, What is principal component analysis?, Nature Biotech., 26 (2008), pp. 303-304.
[43] D. Sabushimike, S. Na, J. Kim, N. Bui, K. Seo, and G. Kim, Low-rank matrix recovery approach for clutter rejection in real-time IR-UWB radar-based moving target detection, Sensors, 16 (2016), 1409.
[44] J. Tanner and K. Wei, Normalized iterative hard thresholding for matrix completion, SIAM J. Sci. Comput., 35 (2013), pp. S104-S125, https://doi.org/10.1137/120876459. · Zbl 1282.65043
[45] J. Tanner and K. Wei, Low rank matrix completion by alternating steepest descent methods, Appl. Comput. Harmon. Anal., 40 (2016), pp. 417-429. · Zbl 1336.65047
[46] L. G. Valiant, Graph-theoretic arguments in low-level complexity, in Mathematical Foundations of Computer Science, Lecture Notes in Comput. Sci. 53, J. Gruska, ed., Springer, 1977, pp. 162-176. · Zbl 0384.68046
[47] N. Vaswani, Y. Chi, and T. Bouwmans, Rethinking PCA for modern data sets: Theory, algorithms, and applications, Proc. IEEE, 106 (2018), pp. 1274-1276.
[48] A. E. Waters, A. C. Sankaranarayanan, and R. G. Baraniuk, SpaRCS: Recovering low-rank and sparse matrices from compressive measurements, in Proceedings of the 24th International Conference on Neural Information Processing Systems, Vol. 2, Granada, Spain, 2011, pp. 1089-1097.
[49] W. Wei, L. Zhang, Y. Zhang, C. Wang, and C. Tian, Hyperspectral image denoising from an incomplete observation, in Proceedings of the 2015 International Conference on Orange Technologies (ICOT), IEEE, 2015, pp. 177-180.
[50] Z. Wen, W. Yin, and Y. Zhang, Solving a low-rank factorization model for matrix completion by a nonlinear successive over-relaxation algorithm, Math. Program. Comput., 4 (2012), pp. 333-361. · Zbl 1271.65083
[51] D. P. Woodruff, Sketching as a tool for numerical linear algebra, Found. Trends Theor. Comput. Sci., 10 (2014), pp. 1-157. · Zbl 1316.65046
[52] J. Wright, A. Yang, A. Ganesh, S. Sastry, and Y. Ma, Robust face recognition via sparse representation, IEEE Trans. Pattern Anal. Mach. Intell., 31 (2009), pp. 210-227.
[53] F. Xu, J. Han, Y. Wang, M. Chen, Y. Chen, G. He, and Y. Hu, Dynamic magnetic resonance imaging via nonconvex low-rank matrix approximation, IEEE Access, 5 (2017), pp. 1958-1966.
[54] X. Yi, D. Park, Y. Chen, and C. Caramanis, Fast algorithms for robust PCA via gradient descent, in Proceedings of the 30th International Conference on Neural Information Processing Systems, Barcelona, Spain, 2016, pp. 4159-4167.
[55] X. Zhang, L. Wang, and Q. Gu, A unified framework for low-rank plus sparse matrix recovery, in Proceedings of the 21st International Conference on Artificial Intelligence and Statistics, Proc. Mach. Learn. Res. 84, Playa Blanca, Lanzarote, Canary Islands, 2018, pp. 1097-1107.