# zbMATH — the first resource for mathematics

Noisy matrix completion: understanding statistical guarantees for convex relaxation via nonconvex optimization. (English) Zbl 07271856

##### MSC:
90C25 Convex programming
90C26 Nonconvex programming, global optimization
##### Software:
FarmTest; SDPLR; softImpute
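The listed softImpute package solves nuclear-norm-regularized matrix completion by repeatedly replacing the missing entries with the current estimate and soft-thresholding the singular values. A minimal numpy sketch of that iteration follows; the function name, step count, threshold, and toy data are illustrative assumptions, not softImpute's actual API.

```python
import numpy as np

def svt_step(M, mask, Z, tau):
    """One soft-impute-style iteration: impute missing entries of M with the
    current estimate Z, then soft-threshold the singular values at level tau."""
    filled = mask * M + (1 - mask) * Z               # observed entries + current guess
    U, s, Vt = np.linalg.svd(filled, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)              # soft-threshold the spectrum
    return (U * s_shrunk) @ Vt

# Toy demo: a rank-1 matrix with a single hidden entry.
rng = np.random.default_rng(0)
M = np.outer(rng.standard_normal(5), rng.standard_normal(4))
mask = np.ones_like(M)
mask[2, 3] = 0.0                                     # hide one entry

Z = np.zeros_like(M)
for _ in range(200):
    Z = svt_step(M, mask, Z, tau=0.01)
```

With a small threshold, the iterates recover the hidden entry of this low-rank toy matrix to within the tau-level shrinkage bias.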
##### References:
[1] E. Abbe, J. Fan, K. Wang, and Y. Zhong, Entrywise eigenvector analysis of random matrices with low expected rank, Ann. Statist., 48 (2020), pp. 1452-1474. · Zbl 1450.62066
[2] A. Ahmed, B. Recht, and J. Romberg, Blind deconvolution using convex programming, IEEE Trans. Inform. Theory, 60 (2014), pp. 1711-1732. · Zbl 1360.94057
[3] J. Bai and S. Ng, Confidence intervals for diffusion index forecasts and inference for factor-augmented regressions, Econometrica, 74 (2006), pp. 1133-1150. · Zbl 1152.91721
[4] A. I. Barvinok, Problems of distance geometry and convex properties of quadratic maps, Discrete Comput. Geom., 13 (1995), pp. 189-202. · Zbl 0829.05025
[5] A. Beck and M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems, SIAM J. Imaging Sci., 2 (2009), pp. 183-202. · Zbl 1175.94009
[6] N. Boumal, V. Voroninski, and A. Bandeira, The non-convex Burer-Monteiro approach works on smooth semidefinite programs, in Advances in Neural Information Processing Systems, Curran Associates, Red Hook, NY, 2016, pp. 2757-2765.
[7] S. Burer and R. D. Monteiro, A nonlinear programming algorithm for solving semidefinite programs via low-rank factorization, Math. Program., 95 (2003), pp. 329-357. · Zbl 1030.90077
[8] J.-F. Cai, E. J. Candès, and Z. Shen, A singular value thresholding algorithm for matrix completion, SIAM J. Optim., 20 (2010), pp. 1956-1982. · Zbl 1201.90155
[9] J.-F. Cai, T. Wang, and K. Wei, Fast and provable algorithms for spectrally sparse signal reconstruction via low-rank Hankel matrix completion, Appl. Comput. Harmon. Anal., 46 (2019), pp. 94-121. · Zbl 1442.94017
[10] T. T. Cai and W.-X. Zhou, Matrix completion via max-norm constrained optimization, Electron. J. Stat., 10 (2016), pp. 1493-1525. · Zbl 1342.62091
[11] E. Candès, X. Li, Y. Ma, and J. Wright, Robust principal component analysis?, J. ACM, 58 (2011), 11. · Zbl 1327.62369
[12] E. Candès, X. Li, and M. Soltanolkotabi, Phase retrieval via Wirtinger flow: Theory and algorithms, IEEE Trans. Inform. Theory, 61 (2015), pp. 1985-2007. · Zbl 1359.94069
[13] E. Candès and Y. Plan, Matrix completion with noise, Proc. IEEE, 98 (2010), pp. 925-936.
[14] E. Candès and B. Recht, Exact matrix completion via convex optimization, Found. Comput. Math., 9 (2009), pp. 717-772. · Zbl 1219.90124
[15] E. Candès and T. Tao, The power of convex relaxation: Near-optimal matrix completion, IEEE Trans. Inform. Theory, 56 (2010), pp. 2053-2080. · Zbl 1366.15021
[16] Y. Cao and Y. Xie, Poisson matrix recovery and completion, IEEE Trans. Signal Process., 64 (2016), pp. 1609-1620. · Zbl 1412.94024
[17] V. Chandrasekaran, S. Sanghavi, P. A. Parrilo, and A. S. Willsky, Rank-sparsity incoherence for matrix decomposition, SIAM J. Optim., 21 (2011), pp. 572-596. · Zbl 1226.90067
[18] J. Chen and X. Li, Model-free nonconvex matrix completion: Local minima analysis and applications in memory-efficient Kernel PCA, J. Mach. Learn. Res., 20 (2019), 39. · Zbl 1441.62157
[19] J. Chen, D. Liu, and X. Li, Nonconvex rectangular matrix completion via gradient descent without $$\ell_{2,\infty}$$ regularization, IEEE Trans. Inform. Theory, 66 (2020), pp. 5806-5841. · Zbl 1448.90078
[20] Y. Chen, Incoherence-optimal matrix completion, IEEE Trans. Inform. Theory, 61 (2015), pp. 2909-2923. · Zbl 1359.15022
[21] Y. Chen and E. Candès, The projected power method: An efficient algorithm for joint alignment from pairwise differences, Comm. Pure Appl. Math., 71 (2018), pp. 1648-1714. · Zbl 06919696
[22] Y. Chen and E. J. Candès, Solving random quadratic systems of equations is nearly as easy as solving linear systems, Comm. Pure Appl. Math., 70 (2017), pp. 822-883, https://doi.org/10.1002/cpa.21638. · Zbl 1379.90024
[23] Y. Chen, C. Cheng, and J. Fan, Asymmetry helps: Eigenvalue and eigenvector analyses of asymmetrically perturbed low-rank matrices, Ann. Statist., to appear.
[24] Y. Chen and Y. Chi, Robust spectral compressed sensing via structured matrix completion, IEEE Trans. Inform. Theory, 60 (2014), pp. 6576-6601. · Zbl 1360.94064
[25] Y. Chen and Y. Chi, Harnessing structures in big data via guaranteed low-rank matrix estimation: Recent theory and fast algorithms via convex and nonconvex optimization, IEEE Signal Process. Mag., 35 (2018), pp. 14-31, https://doi.org/10.1109/MSP.2018.2821706.
[26] Y. Chen, Y. Chi, J. Fan, and C. Ma, Gradient descent with random initialization: Fast global convergence for nonconvex phase retrieval, Math. Program., 176 (2019), pp. 5-37. · Zbl 1415.90086
[27] Y. Chen, J. Fan, C. Ma, and K. Wang, Spectral method and regularized MLE are both optimal for top-$$K$$ ranking, Ann. Statist., 47 (2019), pp. 2204-2235. · Zbl 1425.62038
[28] Y. Chen, J. Fan, C. Ma, and Y. Yan, Inference and uncertainty quantification for noisy matrix completion, Proc. Natl. Acad. Sci. USA, 116 (2019), pp. 22931-22937. · Zbl 1431.90117
[29] Y. Chen, L. J. Guibas, and Q. Huang, Near-optimal joint object matching via convex relaxation, in International Conference on Machine Learning (ICML), ACM, New York, 2014, pp. 100-108.
[30] Y. Chen, A. Jalali, S. Sanghavi, and C. Caramanis, Low-rank matrix recovery from errors and erasures, IEEE Trans. Inform. Theory, 59 (2013), pp. 4324-4337.
[31] Y. Chen and M. J. Wainwright, Fast Low-Rank Estimation by Projected Gradient Descent: General Statistical and Algorithmic Guarantees, preprint, https://arxiv.org/abs/1509.03025, 2015.
[32] Y. Cheng and R. Ge, Non-convex matrix completion against a semi-random adversary, Proc. Mach. Learn. Res., 75 (2018), pp. 1362-1394.
[33] Y. Chi, Y. M. Lu, and Y. Chen, Nonconvex optimization meets low-rank matrix factorization: An overview, IEEE Trans. Signal Process., 67 (2019), pp. 5239-5269. · Zbl 07123429
[34] M. A. Davenport and J. Romberg, An overview of low-rank matrix recovery from incomplete observations, IEEE J. Sel. Topics Signal Process., 10 (2016), pp. 608-622.
[35] L. Ding and Y. Chen, The Leave-One-Out Approach for Matrix Completion: Primal and Dual Analysis, preprint, https://arxiv.org/abs/1803.07554, 2018.
[36] B. Efron, Correlation and large-scale simultaneous significance testing, J. Amer. Statist. Assoc., 102 (2007), pp. 93-103. · Zbl 1284.62340
[37] B. Efron, Correlated z-values and the accuracy of large-scale statistical estimates, J. Amer. Statist. Assoc., 105 (2010), pp. 1042-1055. · Zbl 1390.62139
[38] N. El Karoui, On the impact of predictor geometry on the performance of high-dimensional ridge-regularized generalized robust regression estimators, Probab. Theory Related Fields, 170 (2018), pp. 95-175. · Zbl 1407.62060
[39] J. Fan, X. Han, and W. Gu, Estimating false discovery proportion under arbitrary covariance dependence, J. Amer. Statist. Assoc., 107 (2012), pp. 1019-1035. · Zbl 1395.62219
[40] J. Fan, Y. Ke, Q. Sun, and W.-X. Zhou, FarmTest: Factor-adjusted robust multiple testing with approximate false discovery control, J. Amer. Statist. Assoc., 114 (2019), pp. 1880-1893. · Zbl 1428.62345
[41] J. Fan, Y. Ke, and K. Wang, Factor-adjusted regularized model selection, J. Econometrics, 216 (2020), pp. 71-85. · Zbl 1456.62114
[42] J. Fan, Y. Liao, and M. Mincheva, Large covariance estimation by thresholding principal orthogonal complements, J. R. Stat. Soc. Ser. B Stat. Methodol., 75 (2013), pp. 603-680. · Zbl 1411.62138
[43] J. Fan, W. Wang, and Y. Zhong, Robust covariance estimation for approximate factor models, J. Econometrics, 208 (2019), pp. 5-22. · Zbl 1452.62410
[44] J. Fan, L. Xue, and J. Yao, Sufficient forecasting using factor models, J. Econometrics, 201 (2017), pp. 292-306. · Zbl 1377.62185
[45] M. Fazel, Matrix Rank Minimization with Applications, PhD thesis, Stanford University, Stanford, CA, 2002.
[46] M. Fazel, H. Hindi, and S. Boyd, Rank minimization and applications in system theory, in American Control Conference, Vol. 4, IEEE, Piscataway, NJ, 2004, pp. 3273-3278.
[47] M. Fazel, H. Hindi, and S. P. Boyd, Log-det heuristic for matrix rank minimization with applications to Hankel and Euclidean distance matrices, in American Control Conference, IEEE, Piscataway, NJ, 2003, pp. 2156-2162.
[48] M. Fornasier, H. Rauhut, and R. Ward, Low-rank matrix recovery via iteratively reweighted least squares minimization, SIAM J. Optim., 21 (2011), pp. 1614-1640. · Zbl 1236.65044
[49] R. Ge, C. Jin, and Y. Zheng, No spurious local minima in nonconvex low rank problems: A unified geometric analysis, in International Conference on Machine Learning, ACM, New York, 2017, pp. 1233-1242.
[50] R. Ge, J. D. Lee, and T. Ma, Matrix completion has no spurious local minimum, in Advances in Neural Information Processing Systems, Curran Associates, Red Hook, NY, 2016, pp. 2973-2981.
[51] D. Gross, Recovering low-rank matrices from few coefficients in any basis, IEEE Trans. Inform. Theory, 57 (2011), pp. 1548-1566. · Zbl 1366.94103
[52] S. Gunasekar, A. Acharya, N. Gaur, and J. Ghosh, Noisy matrix completion using alternating minimization, in Joint European Conference on Machine Learning and Knowledge Discovery in Databases, Springer, Berlin, 2013, pp. 194-209.
[53] M. Hardt, Understanding alternating minimization for matrix completion, in Foundations of Computer Science (FOCS), IEEE, Piscataway, NJ, 2014, pp. 651-660.
[54] Q.-X. Huang and L. Guibas, Consistent shape maps via semidefinite programming, Comput. Graph. Forum, 32 (2013), pp. 177-186.
[55] P. Jain, R. Meka, and I. S. Dhillon, Guaranteed rank minimization via singular value projection, in Advances in Neural Information Processing Systems, Curran Associates, Red Hook, NY, 2010, pp. 937-945.
[56] P. Jain, P. Netrapalli, and S. Sanghavi, Low-rank matrix completion using alternating minimization, in ACM Symposium on Theory of Computing, ACM, New York, 2013, pp. 665-674. · Zbl 1293.65073
[57] C. Jin, S. M. Kakade, and P. Netrapalli, Provable efficient online matrix completion via non-convex stochastic gradient descent, in Advances in Neural Information Processing Systems, Curran Associates, Red Hook, NY, 2016, pp. 4520-4528.
[58] I. T. Jolliffe, A note on the use of principal components in regression, J. R. Stat. Soc. Ser. C Appl. Stat., 31 (1982), pp. 300-303.
[59] P. Jung, F. Krahmer, and D. Stöger, Blind demixing and deconvolution at near-optimal rate, IEEE Trans. Inform. Theory, 64 (2018), pp. 704-727. · Zbl 1390.94235
[60] R. H. Keshavan, A. Montanari, and S. Oh, Matrix completion from a few entries, IEEE Trans. Inform. Theory, 56 (2010), pp. 2980-2998. · Zbl 1366.62111
[61] R. H. Keshavan, A. Montanari, and S. Oh, Matrix completion from noisy entries, J. Mach. Learn. Res., 11 (2010), pp. 2057-2078. · Zbl 1242.62069
[62] O. Klopp, Noisy low-rank matrix completion with general sampling distribution, Bernoulli, 20 (2014), pp. 282-303. · Zbl 1400.62115
[63] A. Kneip and P. Sarda, Factor models and variable selection in high-dimensional regression analysis, Ann. Statist., 39 (2011), pp. 2410-2447. · Zbl 1231.62131
[64] V. Koltchinskii, K. Lounici, and A. B. Tsybakov, Nuclear-norm penalization and optimal rates for noisy low-rank matrix completion, Ann. Statist., 39 (2011), pp. 2302-2329, https://doi.org/10.1214/11-AOS894. · Zbl 1231.62097
[65] M.-J. Lai, Y. Xu, and W. Yin, Improved iteratively reweighted least squares for unconstrained smoothed $$\ell_q$$ minimization, SIAM J. Numer. Anal., 51 (2013), pp. 927-957. · Zbl 1268.49038
[66] Q. Li and G. Tang, Approximate support recovery of atomic line spectral estimation: A tale of resolution and precision, Appl. Comput. Harmon. Anal., 48 (2020), pp. 891-948. · Zbl 1436.62081
[67] Y. Li, C. Ma, Y. Chen, and Y. Chi, Nonconvex matrix factorization from rank-one measurements, Proc. Mach. Learn. Res., 89 (2019), pp. 1496-1505.
[68] S. Ling and T. Strohmer, Self-calibration and biconvex compressive sensing, Inverse Problems, 31 (2015), 115002. · Zbl 1327.93183
[69] S. Ling and T. Strohmer, Blind deconvolution meets blind demixing: Algorithms and performance bounds, IEEE Trans. Inform. Theory, 63 (2017), pp. 4497-4520. · Zbl 1370.94583
[70] Z. Liu and L. Vandenberghe, Interior-point method for nuclear norm approximation with application to system identification, SIAM J. Matrix Anal. Appl., 31 (2009), pp. 1235-1256. · Zbl 1201.90151
[71] C. Ma, K. Wang, Y. Chi, and Y. Chen, Implicit regularization in nonconvex statistical estimation: Gradient descent converges linearly for phase retrieval, matrix completion, and blind deconvolution, Found. Comput. Math., 20 (2020), pp. 451-632. · Zbl 1445.90089
[72] S. Ma, D. Goldfarb, and L. Chen, Fixed point and Bregman iterative methods for matrix rank minimization, Math. Program., 128 (2011), pp. 321-353. · Zbl 1221.65146
[73] R. Mazumder, T. Hastie, and R. Tibshirani, Spectral regularization algorithms for learning large incomplete matrices, J. Mach. Learn. Res., 11 (2010), pp. 2287-2322. · Zbl 1242.68237
[74] S. Negahban and M. J. Wainwright, Restricted strong convexity and weighted matrix completion: Optimal bounds with noise, J. Mach. Learn. Res., 13 (2012), pp. 1665-1697. · Zbl 1436.62204
[75] Y. Nesterov, How to make the gradients small, Optima, 88 (2012), pp. 10-11.
[76] N. Parikh and S. Boyd, Proximal algorithms, Found. Trends Optim., 1 (2014), pp. 127-239.
[77] D. Park, A. Kyrillidis, C. Caramanis, and S. Sanghavi, Non-square matrix sensing without spurious local minima via the Burer-Monteiro approach, Proc. Mach. Learn. Res., 54 (2017), pp. 65-74.
[78] D. Paul, E. Bair, T. Hastie, and R. Tibshirani, "Preconditioning" for feature selection and regression in high-dimensional problems, Ann. Statist., 36 (2008), pp. 1595-1618. · Zbl 1142.62022
[79] B. Recht, A simpler approach to matrix completion, J. Mach. Learn. Res., 12 (2011), pp. 3413-3430. · Zbl 1280.68141
[80] B. Recht, M. Fazel, and P. A. Parrilo, Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization, SIAM Rev., 52 (2010), pp. 471-501. · Zbl 1198.90321
[81] J. D. Rennie and N. Srebro, Fast maximum margin matrix factorization for collaborative prediction, in International Conference on Machine Learning, ACM, New York, 2005, pp. 713-719.
[82] A. Rohde and A. B. Tsybakov, Estimation of high-dimensional low-rank matrices, Ann. Statist., 39 (2011), pp. 887-930. · Zbl 1215.62056
[83] A. Shapiro, Y. Xie, and R. Zhang, Matrix completion with deterministic pattern: A geometric perspective, IEEE Trans. Signal Process., 67 (2019), pp. 1088-1103. · Zbl 1414.15034
[84] A. Singer, Angular synchronization by eigenvectors and semidefinite programming, Appl. Comput. Harmon. Anal., 30 (2011), pp. 20-36. · Zbl 1206.90116
[85] A. M.-C. So and Y. Ye, Theory of semidefinite programming for sensor network localization, Math. Program., 109 (2007), pp. 367-384. · Zbl 1278.90482
[86] N. Srebro and A. Shraibman, Rank, trace-norm and max-norm, in International Conference on Computational Learning Theory, Springer, Berlin, 2005, pp. 545-560. · Zbl 1137.68563
[87] R. Sun and Z.-Q. Luo, Guaranteed matrix completion via non-convex factorization, IEEE Trans. Inform. Theory, 62 (2016), pp. 6535-6579. · Zbl 1359.94179
[88] P. Sur, Y. Chen, and E. J. Candès, The likelihood ratio test in high-dimensional logistic regression is asymptotically a rescaled chi-square, Probab. Theory Related Fields, 175 (2019), pp. 487-558. · Zbl 1431.62319
[89] K.-C. Toh and S. Yun, An accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems, Pac. J. Optim., 6 (2010), pp. 615-640. · Zbl 1205.90218
[90] C. Tomasi and T. Kanade, Shape and motion from image streams under orthography: A factorization method, Int. J. Comput. Vis., 9 (1992), pp. 137-154.
[91] S. Tu, R. Boczar, M. Simchowitz, M. Soltanolkotabi, and B. Recht, Low-rank solutions of linear matrix equations via Procrustes flow, in International Conference on Machine Learning, ACM, New York, 2016, pp. 964-973.
[92] B. Vandereycken, Low-rank matrix completion by Riemannian optimization, SIAM J. Optim., 23 (2013), pp. 1214-1236. · Zbl 1277.15021
[93] R. Vershynin, Introduction to the non-asymptotic analysis of random matrices, in Compressed Sensing, Theory and Applications, Cambridge University Press, Cambridge, 2012, pp. 210-268.
[94] L. Wang, X. Zhang, and Q. Gu, A unified computational and statistical framework for nonconvex low-rank matrix estimation, Proc. Mach. Learn. Res., 54 (2017), pp. 981-990.
[95] K. Wei, J.-F. Cai, T. F. Chan, and S. Leung, Guarantees of Riemannian optimization for low rank matrix recovery, SIAM J. Matrix Anal. Appl., 37 (2016), pp. 1198-1222. · Zbl 1347.65109
[96] Z. Wen, W. Yin, and Y. Zhang, Solving a low-rank factorization model for matrix completion by a nonlinear successive over-relaxation algorithm, Math. Program. Comput., 4 (2012), pp. 333-361. · Zbl 1271.65083
[97] H. Zhang, Y. Zhou, Y. Liang, and Y. Chi, A nonconvex approach for phase retrieval: Reshaped Wirtinger flow and incremental algorithms, J. Mach. Learn. Res., 18 (2017), pp. 5164-5198.
[98] T. Zhang, J. M. Pauly, and I. R. Levesque, Accelerating parameter mapping with a locally low rank constraint, Magn. Resonance Med., 73 (2015), pp. 655-661.
[99] T. Zhao, Z. Wang, and H. Liu, A nonconvex optimization framework for low rank matrix estimation, in Advances in Neural Information Processing Systems, Curran Associates, Red Hook, NY, 2015, pp. 559-567.
[100] Q. Zheng and J. Lafferty, Convergence Analysis for Rectangular Matrix Completion Using Burer-Monteiro Factorization and Gradient Descent, preprint, https://arxiv.org/abs/1605.07051, 2016.
[101] Y. Zhong and N. Boumal, Near-optimal bounds for phase synchronization, SIAM J. Optim., 28 (2018), pp. 989-1016. · Zbl 1396.90068
[102] Z. Zhou, X. Li, J. Wright, E. Candès, and Y. Ma, Stable principal component pursuit, in International Symposium on Information Theory, IEEE, Piscataway, NJ, 2010, pp. 1518-1522.
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.