
Can the adaptive Metropolis algorithm collapse without the covariance lower bound? (English) Zbl 1226.65007
Summary: The adaptive Metropolis (AM) algorithm is based on the symmetric random-walk Metropolis algorithm. The proposal distribution has the following time-dependent covariance matrix at step \(n+1\)
\[ S_{n} = \text{Cov}(X_{1},\dots ,X_{n}) + \varepsilon I, \]
that is, the sample covariance matrix of the history of the chain plus a small constant \(\varepsilon >0\) times the identity matrix \(I\). The lower bound on the eigenvalues of \(S_{n}\) induced by the term \(\varepsilon I\) is theoretically convenient but practically cumbersome, since a good value for the parameter \(\varepsilon \) may not always be easy to choose.
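
As an illustration, the following is a minimal sketch (not the authors' implementation) of one AM step using the covariance above; the scaling factor \(2.38^{2}/d\) is the usual adaptive-MCMC heuristic, and the function names and the requirement of at least two stored states are illustrative assumptions.
```python
import numpy as np

def am_step(history, log_target, eps=1e-6, rng=None):
    """One random-walk Metropolis step using S_n = Cov(X_1,...,X_n) + eps*I.

    `history` is an (n, d) array of past states; the sketch assumes n >= 2
    so that the sample covariance is defined.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = history[-1]
    d = history.shape[1]
    # Sample covariance of the chain history plus the eps*I lower bound,
    # scaled by the common 2.38^2/d heuristic.
    S = (2.38**2 / d) * (np.atleast_2d(np.cov(history, rowvar=False))
                         + eps * np.eye(d))
    y = rng.multivariate_normal(x, S)
    # The proposal is symmetric, so the acceptance ratio reduces to the
    # ratio of target densities.
    if np.log(rng.uniform()) < log_target(y) - log_target(x):
        x = y
    return np.vstack([history, x])
```
Iterating `am_step` from a short initial history yields the adaptive chain; in the variants studied below, the `eps * np.eye(d)` term would simply be dropped.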
This article considers variants of the AM algorithm that do not explicitly bound the eigenvalues of \(S_{n}\) away from zero. The behaviour of \(S_{n}\) is studied in detail, indicating that its eigenvalues do not, in general, tend to collapse to zero. In dimension one, it is shown that \(S_{n}\) is bounded away from zero whenever the logarithm of the target density is uniformly continuous. For a modification of the AM algorithm that includes an additional fixed component in the proposal distribution, the eigenvalues of \(S_{n}\) are shown to stay away from zero under a practically non-restrictive condition. This result implies a strong law of large numbers for super-exponentially decaying target distributions with regular contours.
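
A hedged sketch of the kind of modification mentioned above, in which the \(\varepsilon I\) term is dropped and a fixed proposal component is mixed in; the mixture weight `beta`, the fixed covariance `S0`, and the small-sample guard are illustrative assumptions, not the article's prescription.
```python
import numpy as np

def mixed_proposal(history, beta=0.05, S0=None, rng=None):
    """Propose from a mixture: a fixed Gaussian with probability beta,
    otherwise a Gaussian whose covariance is the raw sample covariance of
    the chain history, with no eps*I lower bound."""
    rng = np.random.default_rng() if rng is None else rng
    x = history[-1]
    d = history.shape[1]
    S0 = np.eye(d) if S0 is None else S0
    if rng.uniform() < beta or len(history) < d + 2:
        # Fixed, non-adaptive component (also used while the sample
        # covariance is still poorly determined).
        return rng.multivariate_normal(x, S0)
    S = (2.38**2 / d) * np.atleast_2d(np.cov(history, rowvar=False))
    return rng.multivariate_normal(x, S)
```
Because both mixture components are symmetric about the current state, the Metropolis acceptance rule is unchanged.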

MSC:
65C40 Numerical analysis or methods applied to Markov chains
60J22 Computational methods in Markov chains
60G50 Sums of independent random variables; random walks