Tierney, Luke. Markov chains for exploring posterior distributions (with discussion). Ann. Stat. 22, No. 4, 1701-1762 (1994). Zbl 0829.62080.

Summary: Several Markov chain methods are available for sampling from a posterior distribution. Two important examples are the Gibbs sampler and the Metropolis algorithm. In addition, several strategies are available for constructing hybrid algorithms. This paper outlines some of the basic methods and strategies and discusses some related theoretical and practical issues. On the theoretical side, results from the theory of general state space Markov chains can be used to obtain convergence rates, laws of large numbers and central limit theorems for estimates obtained from Markov chain methods. These theoretical results can be used to guide the construction of more efficient algorithms. For the practical use of Markov chain methods, standard simulation methodology provides several variance reduction techniques and also gives guidance on the choice of sample size and allocation.

Cited in 4 Reviews; cited in 623 Documents.

MSC:
62M05 Markov processes: estimation; hidden Markov models
60J27 Continuous-time Markov processes on discrete state spaces
65C05 Monte Carlo methods
60J05 Discrete-time Markov processes on general state spaces

Keywords: Monte Carlo; Metropolis-Hastings algorithm; Gibbs sampler; Metropolis algorithm; hybrid algorithms; general state space Markov chains; convergence rates; laws of large numbers; central limit theorems; simulation methodology; variance reduction; choice of sample size
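The Metropolis algorithm named in the summary can be sketched briefly. The following is a minimal random-walk Metropolis sampler in Python, not the paper's own presentation: the function name, the Gaussian proposal, and the fixed step scale are illustrative assumptions, and the target here is a standard normal chosen only for demonstration.

```python
import math
import random

def metropolis(log_target, x0, n_samples, scale=1.0):
    """Random-walk Metropolis sampler (minimal sketch).

    log_target: log of the (possibly unnormalized) target density.
    A Gaussian random-walk proposal is assumed for simplicity.
    """
    x = x0
    samples = []
    for _ in range(n_samples):
        proposal = x + random.gauss(0.0, scale)
        # Accept with probability min(1, pi(proposal)/pi(x)),
        # computed on the log scale for numerical stability.
        log_alpha = log_target(proposal) - log_target(x)
        if math.log(random.random()) < log_alpha:
            x = proposal
        samples.append(x)
    return samples

# Illustration: sample from a standard normal target.
random.seed(0)
samples = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_samples=20000)
```

Because the proposal is symmetric, the Hastings correction ratio of proposal densities cancels, which is what distinguishes this Metropolis special case from the general Metropolis-Hastings algorithm also listed in the keywords.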