
Sampling from log-concave distributions. (English) Zbl 0813.60060

The authors present an algorithm for efficient sampling from a probability distribution on \(\mathbb{R}^n\) with log-concave density \(F\), where \(F\), or at least a sufficiently close approximation \(\overline F\) to \(F\), is known. The sampling is restricted to a compact convex set \(K\) such that the integral of \(F\) over \(K\) is sufficiently large. The problem is discretized by covering \(K\) with a set of adjacent hypercubes of small edge length \(\delta\); two methods of covering are given. The algorithm simulates a Markov chain \(X_t\), \(t \in \mathbb{N}_0\), with finite state space \(C\), the set of centres of the covering hypercubes.
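A minimal sketch of this discretization step, assuming for concreteness that \(K\) is given by a membership oracle and fits inside a box of half-width \(R\); keeping only the hypercubes whose centres lie in \(K\) is one simple variant used here for illustration, not one of the paper's two covering methods, and the helper names are hypothetical:

```python
import itertools
import numpy as np

def grid_centres(delta, n, radius, contains):
    """Centres of the delta-hypercubes of an axis-aligned grid whose
    centre lies in K; `contains` is a membership test for K, which is
    assumed to sit inside the box [-radius, radius]^n."""
    m = int(np.ceil(radius / delta))
    axis = (np.arange(-m, m + 1) + 0.5) * delta   # cube centres along one axis
    return [np.array(p) for p in itertools.product(axis, repeat=n)
            if contains(np.array(p))]

# Example: K = Euclidean unit ball in R^2, edge length delta = 0.25.
centres = grid_centres(0.25, 2, 1.0, lambda x: np.linalg.norm(x) <= 1.0)
```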
The transition probabilities are \(P(x,y) = (2n)^{-1} \min\{1, \overline F(y)/\overline F(x)\}\) when \(x, y \in C\) and \(\|y - x\| = \delta\), while \(P(x,x)\) carries the remaining probability. The chain has stationary probabilities \(\pi(x) = cF(x)\) and is time reversible, i.e. \(\pi(x)P(x,y) = \pi(y)P(y,x)\). The authors estimate the convergence of the distribution of \(X_t\) to the stationary distribution by deriving upper and lower bounds for the ‘chi-square’ distance \(\sum_x (1 - P(X_t = x)/\pi(x))^2 \pi(x)\) and an upper bound for the \(l_1\)-distance. The boundary hypercubes, i.e. those in the covering of \(K\) that are only partially contained in \(K\), may complicate the algorithm. The authors therefore derive an upper bound for the convergence to the stationary distribution in a general chain with finite state space, in which the influence of a fixed set with small stationary probability is excluded.
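As an illustration of this transition rule, here is a hedged sketch of one step of the discretized Metropolis walk described above; the Gaussian target, the ball used as \(K\), and the convention that \(\log\overline F = -\infty\) outside the covering are assumptions made for the example, not details taken from the paper:

```python
import numpy as np

def metropolis_step(x, log_fbar, delta, rng):
    """One transition on the grid of cube centres: with probability
    1/(2n) each, propose a move of +/- delta along one coordinate axis,
    accept it with probability min{1, Fbar(y)/Fbar(x)}, and otherwise
    stay at x (the rejected mass is exactly the holding probability P(x,x))."""
    n = len(x)
    y = x.copy()
    y[rng.integers(n)] += delta if rng.random() < 0.5 else -delta
    if np.log(rng.random()) < log_fbar(y) - log_fbar(x):
        return y
    return x

# Example: standard Gaussian density (log-concave) restricted to a ball of radius 3,
# with log Fbar = -infinity outside the ball so such proposals are always rejected.
def log_fbar(x):
    return -0.5 * x @ x if np.linalg.norm(x) <= 3.0 else -np.inf

rng = np.random.default_rng(0)
x = np.zeros(2)
samples = [x]
for t in range(20000):
    x = metropolis_step(x, log_fbar, 0.1, rng)
    samples.append(x)
```

With \(\delta\) fixed, the walk stays on a lattice of cube centres, matching the neighbourhood structure \(\|y - x\| = \delta\) used in the transition probabilities above.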
Reviewer: A.J.Stam (Winsum)

MSC:

60J10 Markov chains (discrete-time Markov processes on discrete state spaces)
65C05 Monte Carlo methods
68Q25 Analysis of algorithms and problem complexity