

Quantitative convergence rates of Markov chains: A simple account. (English) Zbl 1013.60053
Let the transition kernel of a Markov chain be defined on a state space \(X\). Suppose two copies of the chain, \(\{X_n\}\) and \(\{X_n'\}\), are run from two different initial distributions \(\alpha(X_0)\) and \(\alpha(X_0')\). The total variation distance between the two copies after \(k\) steps of the chain is defined by \[ \bigl\|\alpha(X_k)-\alpha(X_k')\bigr\|_{\text{TV}}= \sup_{A\subseteq X}\bigl|P(X_k\in A)-P(X_k'\in A)\bigr|. \] Such quantitative bounds on convergence rates of Markov chains have been motivated by interest in Markov chain Monte Carlo (MCMC) algorithms, including the Gibbs sampler and the Metropolis-Hastings algorithm.
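To make the definition concrete, here is a minimal numerical sketch (not from the paper under review): on a finite state space the supremum over sets \(A\) reduces to half the \(\ell_1\) distance between the two \(k\)-step laws, which can be computed directly from the transition matrix. The kernel \(P\) and the two starting states below are hypothetical.

```python
import numpy as np

# Hypothetical 3-state transition kernel (rows sum to 1); not from the paper.
P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.6, 0.2],
              [0.0, 0.3, 0.7]])
mu0 = np.array([1.0, 0.0, 0.0])  # first copy started at state 0
nu0 = np.array([0.0, 0.0, 1.0])  # second copy started at state 2

def tv_after_k_steps(P, mu0, nu0, k):
    """TV distance between the two k-step laws: half their l1 distance."""
    mu_k = mu0 @ np.linalg.matrix_power(P, k)
    nu_k = nu0 @ np.linalg.matrix_power(P, k)
    return 0.5 * np.abs(mu_k - nu_k).sum()

for k in (1, 5, 25):
    print(f"k={k:2d}  TV={tv_after_k_steps(P, mu0, nu0, k):.4f}")
```

On this toy kernel the distance decays geometrically in \(k\), which is exactly the behaviour the quantitative bounds discussed below aim to certify in advance.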
The author presents such a quantitative bound on the total variation distance after \(k\) iterations between two Markov chains with different initial distributions but identical transition probabilities. The result is a simplified and improved version of a result of J. S. Rosenthal (1995); it also incorporates the \(\varepsilon\)-improvement of G. O. Roberts and R. L. Tweedie [Stochastic Processes Appl. 80, No. 2, 211-229 (1999; Zbl 0961.60066)], and it follows as a special case of the more complicated time-inhomogeneous results of Douc et al. (2002).
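As a hedged illustration of the kind of bound being discussed (not the author's actual theorem, which works with drift and minorization conditions on general state spaces): in the simplest case where the whole state space is small, i.e. \(P(x,\cdot)\ge\varepsilon\,\nu(\cdot)\) for all \(x\), the classical coupling argument gives \(\|\alpha(X_k)-\alpha(X_k')\|_{\text{TV}}\le(1-\varepsilon)^k\). The sketch below computes the largest such \(\varepsilon\) for the same hypothetical finite kernel and compares the bound with the exact distance.

```python
import numpy as np

P = np.array([[0.9, 0.1, 0.0],   # same hypothetical kernel as above
              [0.2, 0.6, 0.2],
              [0.0, 0.3, 0.7]])
# Largest eps with P(x, .) >= eps * nu(.) for all x: sum of column minima.
eps = P.min(axis=0).sum()        # here eps = 0.1

mu0 = np.array([1.0, 0.0, 0.0])
nu0 = np.array([0.0, 0.0, 1.0])
for k in (1, 5, 25):
    mu_k = mu0 @ np.linalg.matrix_power(P, k)
    nu_k = nu0 @ np.linalg.matrix_power(P, k)
    tv = 0.5 * np.abs(mu_k - nu_k).sum()
    print(f"k={k:2d}  exact TV={tv:.4f}  coupling bound={(1 - eps) ** k:.4f}")
```

The point of results such as the one reviewed here is to obtain usable constants of this type when the minorization holds only on a subset of the space and must be combined with a drift condition.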

MSC:
60J05 Discrete-time Markov processes on general state spaces
62M05 Markov processes: estimation; hidden Markov models