Selecting massive variables using an iterated conditional modes/medians algorithm. (English) Zbl 1327.62409

Summary: Empirical Bayes methods are well suited to selecting massive numbers of variables, which may be interconnected through certain hierarchical structures, because of three attributes: they incorporate prior information on model parameters, they allow data-driven hyperparameter values, and they are free of tuning parameters. We propose an iterated conditional modes/medians (ICM/M) algorithm to implement empirical Bayes selection of massive variables while incorporating sparsity or more complicated a priori information. The iterative conditional modes are employed to obtain data-driven estimates of the hyperparameters, and the iterative conditional medians are used to estimate the model coefficients and thereby enable the selection of massive variables. The ICM/M algorithm is computationally fast and easily extends empirical Bayes thresholding, which is adaptive to parameter sparsity, to complex data. Empirical studies suggest competitive performance of the proposed method, even in the simple case of selecting massive numbers of regression predictors.
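The modes/medians alternation described in the summary can be sketched in its simplest setting: the Gaussian sequence model with a spike-and-slab prior, where a conditional-mode step re-estimates the prior inclusion probability and a conditional-median step thresholds each coefficient at its posterior median. This is only an illustrative sketch under simplifying assumptions (known noise scale, a Gaussian slab of fixed scale `tau`, and a flat prior on the inclusion probability `w`); the helper `_mixture_median` and all parameter names are assumptions for illustration, not the authors' implementation.

```python
import math
from statistics import NormalDist

def _mixture_median(p, m, sd):
    # Median of the two-part posterior (1-p)*delta_0 + p*N(m, sd^2).
    nd = NormalDist(m, sd)
    lo = p * nd.cdf(0.0)          # posterior CDF just below zero
    hi = lo + (1 - p)             # posterior CDF at zero (after the point mass)
    if lo <= 0.5 <= hi:           # the jump at zero covers 0.5: median is exactly 0
        return 0.0
    target = 0.5 if 0.5 < lo else 0.5 - (1 - p)
    return nd.inv_cdf(target / p)

def icm_m_sequence(z, tau=2.0, sigma=1.0, n_iter=50):
    """Illustrative ICM/M-style iteration for z_j = beta_j + N(0, sigma^2),
    with prior: beta_j = 0 w.p. 1 - w, beta_j ~ N(0, tau^2) w.p. w."""
    nd = NormalDist()
    beta = [0.0] * len(z)
    marg_sd = math.sqrt(sigma**2 + tau**2)        # marginal sd under the slab
    shrink = tau**2 / (tau**2 + sigma**2)         # posterior-mean shrinkage factor
    post_sd = math.sqrt(shrink) * sigma           # posterior sd under the slab
    w = 0.5
    for _ in range(n_iter):
        # Conditional-mode step: with a flat prior on w, the mode of w given
        # the current zero/nonzero pattern is the fraction of nonzero
        # coefficients (kept away from the boundary for stability).
        nonzero = sum(b != 0.0 for b in beta)
        w = min(max(nonzero / len(z), 1e-3), 1 - 1e-3)
        # Conditional-median step: posterior median of each beta_j given z_j, w.
        for j, zj in enumerate(z):
            slab = w * nd.pdf(zj / marg_sd) / marg_sd
            spike = (1 - w) * nd.pdf(zj / sigma) / sigma
            p = slab / (slab + spike)             # posterior P(beta_j != 0)
            beta[j] = _mixture_median(p, shrink * zj, post_sd)
    return beta, w
```

Because the posterior median is an exact zero whenever the point mass at zero straddles the 0.5 quantile, the median step performs variable selection directly, which is the thresholding behavior the summary attributes to the ICM/M algorithm.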

MSC:

62J05 Linear regression; mixed models
62C12 Empirical decision procedures; empirical Bayes procedures
62F07 Statistical ranking and selection procedures
