
Analysis of a nonreversible Markov chain sampler. (English) Zbl 1083.60516

Summary: We analyze the convergence to stationarity of a simple nonreversible Markov chain that serves as a model for several nonreversible Markov chain sampling methods that are used in practice. Our theoretical and numerical results show that nonreversibility can indeed lead to improvements over the diffusive behavior of simple Markov chain sampling schemes. The analysis uses both probabilistic techniques and an explicit diagonalization.
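The contrast between diffusive and nonreversible behavior described above can be illustrated with a small simulation. The sketch below is not the authors' exact chain; it is a generic "lifted" walk on the cycle Z_n, assumed here for illustration: the state carries a direction, the walk keeps moving in that direction, and it reverses only with small probability (about 1/n). Such a walk typically crosses the cycle in O(n) steps, whereas the simple symmetric random walk needs on the order of n² steps.

```python
import random

def lifted_walk_hitting_time(n, target, flip_prob=None, seed=0):
    """Steps until a direction-augmented ('lifted') walk on Z_n first hits target.

    Illustrative model, not the paper's exact chain: the state is
    (position, direction); the walk keeps moving in its current direction
    and reverses only with small probability flip_prob (default 1/n).
    """
    rng = random.Random(seed)
    if flip_prob is None:
        flip_prob = 1.0 / n
    pos, direction, steps = 0, 1, 0
    while pos != target:
        if rng.random() < flip_prob:
            direction = -direction  # rare reversal keeps the chain ergodic
        pos = (pos + direction) % n
        steps += 1
    return steps

def simple_walk_hitting_time(n, target, seed=0):
    """Steps until a symmetric (reversible) random walk on Z_n first hits target."""
    rng = random.Random(seed)
    pos, steps = 0, 0
    while pos != target:
        pos = (pos + rng.choice((-1, 1))) % n  # diffusive: +/-1 with equal probability
        steps += 1
    return steps

# Compare average hitting times for the point opposite the start.
n = 200
lifted = [lifted_walk_hitting_time(n, n // 2, seed=s) for s in range(20)]
simple = [simple_walk_hitting_time(n, n // 2, seed=s) for s in range(20)]
print(sum(lifted) / len(lifted), sum(simple) / len(simple))
```

Running this, the persistent walk's average hitting time stays close to n, while the symmetric walk's grows roughly like n², matching the qualitative speedup the summary attributes to nonreversibility.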

MSC:

60J10 Markov chains (discrete-time Markov processes on discrete state spaces)
65C05 Monte Carlo methods
Full Text: DOI Euclid
