On the convergence of the Markov chain simulation method. (English) Zbl 0860.60057

The following results on the ergodicity of Markov chains with general state spaces are proved. Suppose that the Markov chain \(\{X_n\}\) with state space \(({\mathcal X},{\mathcal B})\) and transition function \(P(x,C)\) has an invariant probability measure \(\pi\), and that there exist a set \(A\in{\mathcal B}\), a probability measure \(\rho\) with \(\rho(A)=1\), a constant \(\varepsilon>0\) and an integer \(n_0\geq1\) such that \(\pi\{x:P_x(T(A)<\infty)>0\}=1\) and \(P^{n_0}(x,\cdot)\geq\varepsilon\rho(\cdot)\) for each \(x\in A\), where \(T(A)=\inf\{n>0: X_n\in A\}\). Then \[ \lim_{n\to\infty} \sup_{C\in {\mathcal B}} \Biggl|{1\over n} \sum^n_{j=1} P^j(x,C)- \pi(C)\Biggr|=0\qquad \pi\text{-a.s.} \] Moreover, if \(f\) is a measurable function with \(\int|f(y)|\pi(dy)<\infty\), then \[ P_x\Biggl(\lim_{n\to\infty} {1\over n} \sum^n_{j=1} f(X_j)= \int f(y)\pi(dy)\Biggr)=1\qquad \pi\text{-a.s.}, \] and \[ \lim_{n\to\infty} {1\over n} \sum^n_{j=1} E_x(f(X_j))= \int f(y)\pi(dy)\qquad \pi\text{-a.s.} \] If, in addition, \[ \text{g.c.d.}\{m:\text{ there is an }\varepsilon_m>0\text{ such that } P^m(x,\cdot)\geq \varepsilon_m\rho(\cdot)\text{ for each } x\in A\}=1, \] then there is a set \(D\in {\mathcal B}\) with \(\pi(D)=1\) such that \[ \lim_{n\to\infty} \sup_{C\in{\mathcal B}} |P^n(x,C)-\pi(C)|=0\qquad\text{for each } x\in D. \] The authors argue that, compared with earlier results on the topic, these results are better suited to the needs of the Markov chain simulation method, since the assumptions above are easier to verify in practice.
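As a minimal illustrative sketch (not from the paper under review), the ergodic average \((1/n)\sum_{j\le n} f(X_j)\to\int f\,d\pi\) can be demonstrated on a Gaussian AR(1) chain, whose transition density is bounded below on a compact set \(A=[-1,1]\), so a minorization \(P(x,\cdot)\geq\varepsilon\rho(\cdot)\) of the type assumed above holds with \(n_0=1\); all function names and parameter values here are illustrative choices, not the authors' notation.

```python
# Illustrative sketch: ergodic averages of a chain satisfying a
# minorization condition on a small set, as in the theorem above.
import random

def simulate_ar1(n, phi=0.5, seed=0):
    """Simulate X_{k+1} = phi * X_k + N(0, 1).
    The transition density N(phi*x, 1) is bounded below on A = [-1, 1],
    so the minorization P(x, .) >= eps * rho(.) holds with n_0 = 1.
    The invariant law pi is N(0, 1 / (1 - phi**2))."""
    rng = random.Random(seed)
    x, path = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, 1.0)
        path.append(x)
    return path

def ergodic_average(path, f):
    """(1/n) * sum_{j=1}^n f(X_j), the estimator the theorem justifies."""
    return sum(f(x) for x in path) / len(path)

path = simulate_ar1(200_000)
avg = ergodic_average(path, lambda x: x * x)
# For phi = 0.5, the stationary variance is 1/(1 - 0.25) = 4/3, so the
# average of X_j^2 should settle near 1.333 as n grows.
```

The point of the sketch is only that the single easily checked minorization condition already guarantees the strong-law conclusion used to justify Monte Carlo averages.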


60J20 Applications of Markov chains and discrete-time Markov processes on general state spaces (social mobility, learning theory, industrial processes, etc.)
60J05 Discrete-time Markov processes on general state spaces
65C99 Probabilistic methods, stochastic differential equations
60B10 Convergence of probability measures



