## **Algorithms for random generation and counting: a Markov chain approach.**
*(English)*
Zbl 0780.68096

Progress in Theoretical Computer Science. Boston, MA: Birkhäuser. viii, 146 p. (1993).

This monograph studies the counting and random generation of finite sets of combinatorial structures. A single algorithmic paradigm is used: simulate a discrete-time Markov chain \(X=\{X_t\}_{t=0}^\infty\) whose states are combinatorial structures. Preferably \(X\) should be set up to be ergodic and time-reversible; the idea is then to start from any conveniently chosen state and to simulate \(X\) until a time \(t\) at which the distribution of \(X_t\) is uniformly within a prescribed \(\epsilon>0\) of the stationary distribution. Moreover, the class of Markov chains associated with all instances \(x\) of a particular combinatorial problem should be rapidly mixing, which essentially means that the time \(t\) above can be calculated as a polynomially bounded function of \(\epsilon\) and of the “complexity” of \(x\).
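The paradigm can be illustrated by a toy example (not one from the book): a lazy random walk on the hypercube \(\{0,1\}^n\), which is ergodic and time-reversible with the uniform distribution as its stationary distribution. A minimal sketch, with the number of steps left as a parameter the user must choose large enough for the chain to mix:

```python
import random

def lazy_hypercube_walk(n, steps, seed=None):
    """Simulate a lazy random walk on {0, 1}^n.

    The chain is ergodic and time-reversible, and its stationary
    distribution is uniform over the 2^n bit strings; for `steps`
    large enough, the final state is close to a uniform sample.
    """
    rng = random.Random(seed)
    state = [0] * n              # any conveniently chosen start state
    for _ in range(steps):
        if rng.random() < 0.5:   # laziness: stay put with probability 1/2
            continue
        i = rng.randrange(n)     # pick a uniformly random coordinate
        state[i] ^= 1            # and flip it
    return tuple(state)
```

Rapid mixing for a problem class would mean that a sufficient value of `steps` grows only polynomially in the instance size and the accuracy parameter, which is exactly what the conductance arguments below are used to certify.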

The key problem is to ensure rapid mixing, which the author relates to a more accessible feature of the Markov chain called its conductance. This is defined to be the minimum of the conditional probabilities \(P(X_1\in S\setminus E \mid X_0\in E)\), where \(X_0\) is taken to have the stationary distribution of the chain, \(S\) denotes the state space, and \(E\subseteq S\) ranges over sets such that \(P(X_0\in E)\leq 1/2\). The use of conductance enables the author to obtain, for example, an efficient (i.e. polynomial-time) algorithm to approximate the permanent of a sufficiently “dense” matrix with \(0\)–\(1\) entries.
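For a small finite chain, this definition can be checked directly by brute force over all candidate sets \(E\). A minimal sketch (illustrative only; the function name and interface are invented here):

```python
from itertools import combinations

def conductance(P, pi):
    """Brute-force conductance of a finite Markov chain.

    P  : transition matrix, P[x][y] = probability of moving x -> y.
    pi : stationary distribution of the chain.

    Returns the minimum over all E with pi(E) <= 1/2 of
        P(X_1 in S \ E | X_0 in E)
      = sum_{x in E, y not in E} pi[x] * P[x][y]  /  pi(E),
    where X_0 is distributed according to pi.
    """
    n = len(P)
    states = range(n)
    best = float("inf")
    for r in range(1, n):
        for E in combinations(states, r):
            p_E = sum(pi[x] for x in E)
            if p_E == 0 or p_E > 0.5:
                continue
            E_set = set(E)
            escape = sum(pi[x] * P[x][y]
                         for x in E for y in states if y not in E_set)
            best = min(best, escape / p_E)
    return best
```

This enumeration is exponential in the number of states and is useful only for tiny examples; the point of the conductance machinery in the book is precisely to *bound* this quantity analytically for exponentially large state spaces.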

While tighter time bounds, and hence greater efficiency, can be obtained for some combinatorial problems using other methods such as coupling, stopping times and group representation theory, the author’s Markov chain method also applies to problems (e.g. the calculation of permanents) which are not amenable to analysis by any of the established methods. Moreover, if one is content with approximate counting and almost uniform generation, the method can treat an even wider class of problems in polynomial time.

Reviewer: B.K.Horkelheimer (Clayton)

### MSC:

| MSC | Classification |
| --- | --- |
| 68R05 | Combinatorics in computer science |
| 68U20 | Simulation (MSC2010) |
| 60J10 | Markov chains (discrete-time Markov processes on discrete state spaces) |
| 60J20 | Applications of Markov chains and discrete-time Markov processes on general state spaces (social mobility, learning theory, industrial processes, etc.) |
| 15A15 | Determinants, permanents, traces, other special matrix functions |
| 65C99 | Probabilistic methods, stochastic differential equations |
| 65F20 | Numerical solutions to overdetermined systems, pseudoinverses |
| 65F50 | Computational methods for sparse matrices |