
zbMATH — the first resource for mathematics

Geometric ergodicity of the Bayesian Lasso. (English) Zbl 1349.60124
Summary: Consider the standard linear model \(\mathbf y=X\beta+\sigma\epsilon\), where the components of \(\epsilon\) are i.i.d. standard normal errors. [T. Park and G. Casella, J. Am. Stat. Assoc. 103, No. 482, 681–686 (2008; Zbl 1330.62292)] consider a Bayesian treatment of this model with a Laplace/Inverse-Gamma prior on \((\beta,\sigma)\). They introduce a data augmentation approach that can be used to explore the resulting intractable posterior density, and call it the Bayesian lasso algorithm. In this paper, the Markov chain underlying the Bayesian lasso algorithm is shown to be geometrically ergodic for arbitrary values of the sample size \(n\) and the number of variables \(p\). This is important, as geometric ergodicity provides theoretical justification for the use of the Markov chain CLT, which can then be used to obtain asymptotic standard errors for Markov chain-based estimates of posterior quantities. M. Kyung et al. [Bayesian Anal. 5, No. 2, 369–411 (2010; Zbl 1330.62289)] provide a proof of geometric ergodicity for the restricted case \(n\geq p\), but as we explain in this paper, their proof is incorrect. Our approach is different and more direct, and enables us to establish geometric ergodicity for arbitrary \(n\) and \(p\).
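The data augmentation sampler referred to above alternates between drawing \(\beta\), \(\sigma^2\), and the latent scale parameters \(\tau_1^2,\dots,\tau_p^2\) from their full conditionals. The following is a minimal sketch of such a Gibbs sampler in the spirit of Park and Casella (2008); the function name `bayesian_lasso_gibbs`, the initialization, and the simplified treatment of the \(\sigma^2\) prior (an improper \(1/\sigma^2\) prior, i.e. inverse-gamma hyperparameters set to zero, and no intercept/centering) are illustrative assumptions, not the authors' exact setup.

```python
import numpy as np

def bayesian_lasso_gibbs(y, X, lam=1.0, n_iter=5000, seed=0):
    """Sketch of a data-augmentation Gibbs sampler for the Bayesian lasso.

    Assumes the model y = X beta + sigma * eps with an improper 1/sigma^2
    prior on sigma^2 (a simplifying choice for illustration).
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta = np.zeros(p)
    sigma2 = 1.0
    inv_tau2 = np.ones(p)          # latent 1/tau_j^2 augmentation variables
    XtX = X.T @ X
    Xty = X.T @ y
    draws = np.empty((n_iter, p))
    for t in range(n_iter):
        # beta | sigma2, tau2, y  ~  N(A^{-1} X'y, sigma2 * A^{-1}),
        # with A = X'X + D_tau^{-1}
        A_inv = np.linalg.inv(XtX + np.diag(inv_tau2))
        beta = rng.multivariate_normal(A_inv @ Xty, sigma2 * A_inv)
        # sigma2 | beta, tau2, y  ~  Inverse-Gamma
        resid = y - X @ beta
        shape = (n + p) / 2
        scale = resid @ resid / 2 + beta @ (inv_tau2 * beta) / 2
        sigma2 = scale / rng.gamma(shape)
        # 1/tau_j^2 | beta, sigma2  ~  Inverse-Gaussian(mu_j, lam^2),
        # with mu_j = sqrt(lam^2 sigma2 / beta_j^2)
        mu = np.sqrt(lam**2 * sigma2 / beta**2)
        inv_tau2 = rng.wald(mu, lam**2)
        draws[t] = beta
    return draws
```

Because \(\beta\) is drawn before the inverse-Gaussian step, the division by \(\beta_j^2\) is almost surely well defined. The geometric ergodicity result of the paper concerns the Markov chain produced by exactly this kind of alternating scheme.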
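The practical payoff of geometric ergodicity mentioned above is that the Markov chain CLT licenses consistent standard-error estimates, e.g. via the batch means method studied by Jones et al. (2006) and Flegal and Jones (2010). A minimal sketch (the function name `batch_means_se` and the \(\sqrt{m}\) batch-size rule are illustrative choices, not prescriptions from the paper):

```python
import numpy as np

def batch_means_se(chain):
    """Batch-means estimate of the Monte Carlo standard error of the
    sample mean of a scalar Markov chain (sketch)."""
    m = len(chain)
    b = int(np.floor(np.sqrt(m)))              # batch size ~ sqrt(m)
    a = m // b                                 # number of batches
    batch_means = chain[:a * b].reshape(a, b).mean(axis=1)
    # b * sample variance of the batch means estimates the asymptotic
    # variance appearing in the Markov chain CLT
    var_hat = b * np.var(batch_means, ddof=1)
    return np.sqrt(var_hat / m)
```

For a geometrically ergodic chain, such estimates justify reporting \(\bar g_m \pm z_{\alpha/2}\,\widehat{\mathrm{se}}\) intervals for posterior quantities.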

MSC:
60J22 Computational methods in Markov chains
60F05 Central limit and other weak theorems
62J07 Ridge regression; shrinkage estimators (Lasso)
62F15 Bayesian inference
References:
[1] Asmussen, S. and Glynn, P.W. (2011). A new proof of convergence of MCMC via the ergodic theorem, Statistics & Probability Letters 81 , 1482-1485. · Zbl 1232.65015
[2] Bednorz, W. and Łatuszyński, K. (2007). A few remarks on “Fixed-Width Output Analysis for Markov Chain Monte Carlo” by Jones et al., Journal of the American Statistical Association 102 , 1485-1486.
[3] Bae, K. and Mallick, B.K. (2004). Gene selection using a two-level hierarchical Bayesian model, Bioinformatics 20 , 3423-3430.
[4] Bhattacharya, A., Pati, D., Pillai, N.S., and Dunson, D.B. (2012). Bayesian shrinkage, arXiv · Zbl 1419.62050
[5] Carvalho, C.M., Polson, N.G., and Scott, J.G. (2009). Handling sparsity via the horseshoe, Journal of Machine Learning Research W & CP 5 , 73-80.
[6] Carvalho, C.M., Polson, N.G., and Scott, J.G. (2010). The horseshoe estimator for sparse signals, Biometrika 97 , 465-480. · Zbl 1406.62021
[7] Figueiredo, M.A.T. (2003). Adaptive sparseness for supervised learning, IEEE Transactions on Pattern Analysis and Machine Intelligence 25 , 1150-1159.
[8] Flegal, J.M., Haran, M., and Jones, G.L. (2008). Markov chain Monte Carlo: Can we trust the third significant figure? Statistical Science 23 , 250-260. · Zbl 1327.62017
[9] Flegal, J.M. and Jones, G.L. (2010). Batch means and spectral variance estimators in Markov chain Monte Carlo, Annals of Statistics 38 , 1034-1070. · Zbl 1184.62161
[10] Griffin, J.E. and Brown, P.J. (2010). Inference with normal-gamma prior distributions in regression problems, Bayesian Analysis 5 , 171-188. · Zbl 1330.62128
[11] Jones, G.L., Haran, M., Caffo, B.S., and Neath, R. (2006). Fixed-width output analysis for Markov chain Monte Carlo, Journal of the American Statistical Association 101 , 1537-1547. · Zbl 1171.62316
[12] Kyung, M., Gill, J., Ghosh, M., and Casella, G. (2010). Penalized regression, standard errors and Bayesian lassos, Bayesian Analysis 5 , 369-412. · Zbl 1330.62289
[13] Meyn, S.P. and Tweedie, R.L. (1993). Markov Chains and Stochastic Stability , Springer-Verlag, London. · Zbl 0925.60001
[14] Park, T. and Casella, G. (2008). The Bayesian lasso, Journal of the American Statistical Association 103 , 681-686. · Zbl 1330.62292
[15] Polson, N.G. and Scott, J.G. (2010). Shrink globally, act locally: Sparse Bayesian regularization and prediction, Bayesian Statistics 9 , J.M. Bernardo, M.J. Bayarri, J.O. Berger, A.P. Dawid, D. Heckerman, A.F.M. Smith, and M. West, eds., Oxford University Press, New York.
[16] Roberts, G.O. and Rosenthal, J.S. (1998). Markov chain Monte Carlo: Some practical implications of theoretical results (with discussion), Canadian Journal of Statistics 26 , 5-31. · Zbl 0920.62105
[17] Roberts, G.O. and Rosenthal, J.S. (2004). General state space Markov chains and MCMC algorithms, Probability Surveys 1 , 20-71. · Zbl 1189.60131
[18] Rosenthal, J.S. (1995). Minorization conditions and convergence rates for Markov chain Monte Carlo, Journal of the American Statistical Association 90 , 558-566. · Zbl 0824.60077
[19] Tibshirani, R. (1996). Regression shrinkage and selection via the lasso, Journal of the Royal Statistical Society B 58 , 267-288. · Zbl 0850.62538
[20] Yuan, M. and Lin, Y. (2005). Efficient empirical Bayes variable selection and estimation in linear models, Journal of the American Statistical Association 100 , 1215-1225. · Zbl 1117.62453