zbMATH — the first resource for mathematics

Bayesian inference for spectral projectors of the covariance matrix. (English) Zbl 1406.62064
Summary: Let \(X_{1},\ldots,X_{n}\) be an i.i.d. sample in \(\mathbb{R}^{p}\) with zero mean and covariance matrix \(\boldsymbol{\Sigma}^\ast\). The classical PCA approach recovers the projector \(\boldsymbol{P}^\ast_{\mathcal{J}}\) onto the principal eigenspace of \(\boldsymbol{\Sigma}^\ast\) by its empirical counterpart \(\widehat{\boldsymbol{P}}_{\mathcal{J}}\). The recent paper [V. Koltchinskii and K. Lounici, Ann. Stat. 45, No. 1, 121–157 (2017; Zbl 1367.62175)] investigated the asymptotic distribution of the Frobenius distance between the projectors, \(\|\widehat{\boldsymbol{P}}_{\mathcal{J}}-\boldsymbol{P}^\ast_{\mathcal{J}}\|_{2}\), while [A. Naumov et al., "Bootstrap confidence sets for spectral projectors of sample covariance", Preprint, arXiv:1703.00871] offered a bootstrap procedure for quantifying the uncertainty in recovering the subspace given by \(\boldsymbol{P}^\ast_{\mathcal{J}}\) even in a finite sample setup. The present paper considers this problem from a Bayesian perspective and suggests using the credible sets of the pseudo-posterior distribution on the space of covariance matrices, induced by the conjugate Inverse Wishart prior, as sharp confidence sets. This yields a numerically efficient procedure. Moreover, we theoretically justify this method and derive finite sample bounds on the corresponding coverage probability. In contrast to [Koltchinskii and Lounici, loc. cit.; Naumov et al., loc. cit.], the obtained results are valid for non-Gaussian data: the main assumption that we impose is the concentration of the sample covariance \(\widehat{\boldsymbol{\Sigma}}\) in a vicinity of \(\boldsymbol{\Sigma}^\ast\). Numerical simulations illustrate the good performance of the proposed procedure even on non-Gaussian data in a rather challenging regime.
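The pseudo-posterior procedure described in the summary can be sketched numerically. The following is a minimal illustration, not the authors' implementation: it draws covariance matrices from an Inverse Wishart pseudo-posterior \(\mathrm{IW}(\nu_{0}+n,\,\boldsymbol{\Psi}_{0}+n\widehat{\boldsymbol{\Sigma}})\), forms the induced distribution of the Frobenius distance between spectral projectors, and uses its quantile as the radius of a credible ball around \(\widehat{\boldsymbol{P}}_{\mathcal{J}}\). The prior parameters \(\nu_{0}\), \(\boldsymbol{\Psi}_{0}\) and all numerical settings below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(0)
p, n = 5, 200

# Ground-truth covariance with a well-separated top eigenvalue,
# so the principal eigenspace is one-dimensional.
Sigma_star = np.diag([5.0, 1.0, 1.0, 0.5, 0.5])
X = rng.multivariate_normal(np.zeros(p), Sigma_star, size=n)
Sigma_hat = X.T @ X / n

def top_projector(S, k=1):
    """Projector onto the span of the k leading eigenvectors of S."""
    _, vecs = np.linalg.eigh(S)   # eigh returns eigenvalues in ascending order
    U = vecs[:, -k:]
    return U @ U.T

P_hat = top_projector(Sigma_hat)

# Pseudo-posterior draws: IW(nu0 + n, Psi0 + n * Sigma_hat) with a weak prior.
nu0, Psi0 = p + 2, np.eye(p)
dists = []
for _ in range(2000):
    Sigma_b = invwishart.rvs(df=nu0 + n, scale=Psi0 + n * Sigma_hat,
                             random_state=rng)
    dists.append(np.linalg.norm(top_projector(Sigma_b) - P_hat, "fro"))

# Radius of a 95% credible ball around P_hat in Frobenius norm;
# coverage is checked against the known ground truth.
r_95 = np.quantile(dists, 0.95)
P_star = top_projector(Sigma_star)
covered = np.linalg.norm(P_hat - P_star, "fro") <= r_95
print(f"credible radius: {r_95:.3f}, true projector covered: {covered}")
```

In the paper's setting, the quantile of this pseudo-posterior distance distribution plays the role of the confidence-set radius whose coverage probability is bounded in finite samples; the sketch only mimics that construction under a Gaussian simulation.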
62H25 Factor analysis and principal components; correspondence analysis
62F15 Bayesian inference
62G20 Asymptotic properties of nonparametric inference
Full Text: DOI Euclid arXiv
[1] Adamczak, R., Litvak, A. E., Pajor, A. and Tomczak-Jaegermann, N. (2010). Quantitative estimates of the convergence of the empirical covariance matrix in log-concave ensembles., J. Amer. Math. Soc., 23, 535-561. · Zbl 1206.60006
[2] Bentkus, V. (2005). A Lyapunov-type bound in \(\mathbbR ^d\)., Theory Probab. Appl., 49, 2, 311-323. · Zbl 1090.60019
[3] Berthet, Q. and Rigollet, P. (2013). Optimal detection of sparse principal components in high dimension., Ann. Statist., 41, 4, 1780-1815. · Zbl 1277.62155
[4] Bickel, P. J. and Kleijn, B. J. K. (2012). The semiparametric Bernstein–von Mises theorem., Ann. Statist., 40, 206-237. · Zbl 1246.62081
[5] Birnbaum, A., Johnstone, I. M., Nadler, B. and Paul, D. (2013). Minimax bounds for sparse PCA with noisy high-dimensional data., Ann. Statist., 41, 3, 1055-1084. · Zbl 1292.62071
[6] Cai, T. T., Ma, Z. and Wu, Y. (2013). Sparse PCA: optimal rates and adaptive estimation., Ann. Statist., 41, 6, 3074-3110. · Zbl 1288.62099
[7] Castillo, I. and Nickl, R. (2013). Nonparametric Bernstein–von Mises theorems in Gaussian white noise., Ann. Statist., 41, 4, 1999-2028. · Zbl 1285.62052
[8] Castillo, I. and Rousseau, J. (2015). A Bernstein–von Mises theorem for smooth functionals in semiparametric models., Ann. Statist., 43, 6, 2353-2383. · Zbl 1327.62302
[9] El Karoui, N. (2007). Tracy–Widom limit for the largest eigenvalue of a large class of complex sample covariance matrices., Ann. Probab., 35, 2, 663-714. · Zbl 1117.60020
[10] Fan, J., Rigollet, P. and Wang, W. (2015). Estimation of functionals of sparse covariance matrices., Ann. Statist., 43, 6, 2706-2737. · Zbl 1327.62338
[11] Fan, J., Sun, Q., Zhou, W.-X. and Zhu, Z. (2018). Principal component analysis for big data., arXiv:1801.01602.
[12] Holtz, M. (2010)., Sparse grid quadrature in high dimensions with applications in finance and insurance. Lecture notes in computational science and engineering, 77, Springer, Berlin.
[13] Gao, C. and Zhou, H. H. (2015). Rate-optimal posterior contraction for sparse PCA., Ann. Statist., 43, 2, 785-818. · Zbl 1312.62078
[14] Ghosh, J. K. and Basu, A. (2017). General robust Bayes pseudo-posterior: exponential convergence results with applications., arXiv:1708.09692.
[15] Ghosh, J. K. and Ramamoorthi, R. V. (2003)., Bayesian nonparametrics. Springer-Verlag, New York.
[16] Goetze, F., Naumov, A., Spokoiny, V. and Ulyanov, V. (2018). Large ball probabilities, Gaussian comparison and anti-concentration., arXiv:1708.08663.
[17] Goodfellow, I., Bengio, Y. and Courville, A. (2016)., Deep learning, MIT Press. · Zbl 1373.68009
[18] Johnstone, I. M. and Lu, A. Y. (2009). On consistency and sparsity for principal components analysis in high dimensions., J. Amer. Statist. Assoc., 104, 682-693. · Zbl 1388.62174
[19] Johnstone, I. M. (2010). High dimensional statistical inference and random matrices. In, International Congress of Mathematicians, I, 307-333, Eur. Math. Soc., Zurich. · Zbl 1120.62033
[20] Johnstone, I. M. (2007). High dimensional Bernstein–von Mises: simple examples., Inst. Math. Stat. Collect., 6, 87-98.
[21] Koltchinskii, V. and Lounici, K. (2016). Asymptotics and concentration bounds for bilinear forms of spectral projectors of sample covariance., Ann. Inst. H. Poincaré Probab. Statist., 52, 4, 1976-2013. · Zbl 1353.62053
[22] Koltchinskii, V. and Lounici, K. (2017). Concentration inequalities and moment bounds for sample covariance operators., Bernoulli, 23, 1, 110-133. · Zbl 1366.60057
[23] Koltchinskii, V. and Lounici, K. (2017). New asymptotic results in Principal Component Analysis., Sankhya A., 79, 2, 254-297. · Zbl 06822893
[24] Koltchinskii, V. and Lounici, K. (2017). Normal approximation and concentration of spectral projectors of sample covariance., Ann. Statist., 45, 1, 121-157. · Zbl 1367.62175
[25] Le Cam, L. and Yang, G. L. (1990)., Asymptotics in statistics: some basic concepts. Springer, New York. · Zbl 0719.62003
[26] Marchenko, V. A. and Pastur, L. A. (1967). Distribution of eigenvalues in certain sets of random matrices., Mat. Sb. (N.S.), 72 (114), 4, 507-536. · Zbl 0152.16101
[27] Naumov, A., Spokoiny, V. and Ulyanov, V. (2017). Bootstrap confidence sets for spectral projectors of sample covariance., arXiv:1703.00871. · Zbl 1420.62073
[28] Reiss, M. and Wahl, M. (2018). Non-asymptotic upper bounds for the reconstruction error of PCA., arXiv:1609.03779.
[29] Rigollet, P. (2015)., Lecture notes on high-dimensional statistics. · Zbl 1372.00038
[30] Tropp, J. (2012). User-friendly tail bounds for sums of random matrices., Found. Comput. Math., 12, 4, 389-434. · Zbl 1259.60008
[31] Van der Vaart, A. W. (2000)., Asymptotic statistics. Cambridge series in statistical and probabilistic mathematics 3, Cambridge University Press, Cambridge.
[32] Vershynin, R. (2016). Introduction to the non-asymptotic analysis of random matrices. In, Compressed sensing, 210-268, Cambridge University Press, Cambridge.
[33] Wang, W. and Fan, J. (2017). Asymptotics of empirical eigenstructure for high dimensional spiked covariance, Ann. Statist., 45, 3, 1342-1374. · Zbl 1373.62299
[34] Zhou, H. H. and Gao, C. (2016). Bernstein–von Mises theorems for functionals of the covariance matrix., Electron. J. Statist., 10, 2, 1751-1806. · Zbl 1346.62059