## Markov processes for stochastic modeling. (English) Zbl 0866.60056

London: Chapman & Hall. vii, 341 p. (1997).
This is a monograph on homogeneous Markov chains, i.e. discrete- or continuous-time Markov processes with countable state space and stationary transition probabilities. It covers some very recent developments in the area, yet the only prerequisite is elementary probability theory; the appendices supply some technical background. The book contains exercises and many examples illustrating the theoretical developments, a long, modern bibliography, and subject, author and notation indexes. Long proofs are omitted, but the reader is always referred to appropriate references, and there are many pointers to further developments of each subject treated. The title is somewhat misleading, in that the emphasis is on mathematical treatment rather than on the construction of mathematical models from real-world problems.
The book is evenly divided between the discrete- and the continuous-time settings. The first part begins with the standard theory of the classification of states and the discussion of stationary and limiting distributions. The following advanced topics are then developed:

a) A Markov chain is reversible if $$P(X_{m+n} =j\mid X_m=i) =P(X_m=j \mid X_{m+n} =i)$$. Some criteria for the time reversibility of ergodic Markov chains are given.

b) The transient behaviour of a finite ergodic Markov chain (the distribution of each random variable $$X_n$$) is in principle easy to compute from the initial distribution and the transition matrix. For large $$n$$, however, this computation requires many more operations than the computation of the stationary law, so it is of interest to approximate the transient behaviour by the stationary behaviour. The exact computation of the rate of convergence is lengthy for large state spaces, but some upper bounds can be given.

c) The study of the transition probability from one state to another when some states are prohibited in between (taboo states) gives rise to substochastic transition matrices. A Markov chain governed by a strictly substochastic transition matrix is called a lossy Markov chain.

d) A Markov chain $$\{X_n\}$$ is said to be increasing if $$X_{n+1} \succ X_n$$, where $$\succ$$ denotes some stochastic ordering. The relation $$X_n \succ Y_n$$ between two Markov chains is also of interest. Several notions of stochastic ordering between countably valued random variables can be considered; the book states and exemplifies them in detail. The chapter on this subject is difficult to read, owing to notational complexity that is perhaps unavoidable.
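For an ergodic chain with stationary law $$\pi$$, the reversibility condition above is equivalent to detailed balance, $$\pi_i p_{ij} = \pi_j p_{ji}$$, which is easy to check numerically. A minimal sketch (the three-state chain and the helper names are illustrative, not taken from the book; birth-death walks such as this one are always reversible):

```python
import numpy as np

# Illustrative 3-state ergodic chain (a small birth-death walk).
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])

def stationary_distribution(P):
    """Solve pi P = pi, sum(pi) = 1, via the left eigenvector for eigenvalue 1."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    return pi / pi.sum()

def is_reversible(P, tol=1e-10):
    """Detailed-balance test: pi_i P_ij == pi_j P_ji for all i, j."""
    pi = stationary_distribution(P)
    D = pi[:, None] * P          # entries pi_i P_ij; symmetric iff reversible
    return np.allclose(D, D.T, atol=tol)

pi = stationary_distribution(P)  # stationary law (0.25, 0.5, 0.25)
print(pi, is_reversible(P))
```

The symmetry test on the matrix with entries $$\pi_i p_{ij}$$ is one of the simplest of the reversibility criteria the book discusses.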
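The approximation of the transient behaviour by the stationary one can also be illustrated numerically: the total-variation distance to stationarity decays geometrically, governed by the second-largest eigenvalue modulus of the transition matrix. A sketch on an invented three-state chain (all names and numbers are for illustration only):

```python
import numpy as np

# Same illustrative birth-death walk as above; its stationary law is
# (0.25, 0.5, 0.25) and its eigenvalues are 1, 0.5, 0.
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
pi = np.array([0.25, 0.50, 0.25])

# Second-largest eigenvalue modulus (SLEM): the geometric rate of convergence.
slem = sorted(np.abs(np.linalg.eigvals(P)))[-2]

dist = np.array([1.0, 0.0, 0.0])          # start deterministically in state 0
for n in range(1, 21):
    dist = dist @ P                        # law of X_n
    tv = 0.5 * np.abs(dist - pi).sum()     # total-variation distance to pi
    if n % 5 == 0:
        print(f"n={n:2d}  TV={tv:.2e}  SLEM^n={slem**n:.2e}")
```

For large state spaces the eigenvalues are expensive to obtain exactly, which is why the book emphasizes computable upper bounds on this rate instead.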
The second part of the book is devoted to homogeneous continuous-time Markov chains with strongly continuous transition matrices. Infinitesimal generators and the forward and backward Kolmogorov equations are introduced, and the simpler finite-state case is studied first. Some of the advanced topics are the following:

a) Under some assumptions, always satisfied in the finite-state case, one can associate with a continuous-time Markov chain $$\{X(t)\}$$ a certain family of discrete-time Markov chains, called the uniformized Markov chain of $$\{X(t)\}$$. In the finite-state situation, the author studies time reversibility, the rate of convergence to stationarity, and numerical methods for the computation of the transition probabilities of $$X(t)$$ by means of its uniformized Markov chain.

b) Monotonicity is also studied under the hypothesis of uniformizability.

c) If the time spent in each state by a Markov chain $$\{X(t)\}$$ is strictly positive, then it must be exponentially distributed. Allowing holding times with a different, possibly state-dependent distribution, one obtains a process called a semi-Markov chain.

d) In the final chapter, the theory is specialized to birth and death processes: $$\mathbb{N}$$-valued Markov chains whose jumps have size one.
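The uniformization construction admits a short numerical sketch. For a finite generator $$Q$$, choose $$\Lambda \geq \max_i |q_{ii}|$$; then $$\hat P = I + Q/\Lambda$$ is a stochastic matrix and $$P(t) = \sum_k e^{-\Lambda t} (\Lambda t)^k / k! \; \hat P^k$$. The code below (my own illustration, not the book's notation) checks this against the closed-form transition function of a two-state chain:

```python
import numpy as np
from math import exp

def transition_matrix(Q, t, terms=100):
    """P(t) by uniformization: Poisson mixture of powers of P_hat = I + Q/Lambda."""
    Lam = np.max(np.abs(np.diag(Q)))       # any Lambda >= max_i |q_ii| works
    P_hat = np.eye(Q.shape[0]) + Q / Lam   # stochastic matrix of the uniformized chain
    out = np.zeros_like(Q, dtype=float)
    weight = exp(-Lam * t)                 # Poisson(Lambda t) pmf at k = 0
    power = np.eye(Q.shape[0])
    for k in range(terms):
        out += weight * power
        power = power @ P_hat
        weight *= Lam * t / (k + 1)        # advance the pmf from k to k + 1
    return out

# Two-state chain with rates a (0 -> 1) and b (1 -> 0); here
# P_00(t) = b/(a+b) + a/(a+b) * exp(-(a+b) t) in closed form.
a, b = 2.0, 3.0
Q = np.array([[-a, a], [b, -b]])
P_t = transition_matrix(Q, t=0.7)
closed_form = b/(a+b) + a/(a+b) * exp(-(a+b) * 0.7)
print(P_t[0, 0], closed_form)              # the two values agree
```

Truncating the Poisson series gives the kind of controlled numerical method for computing $$P(t)$$ that the author develops via the uniformized chain.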
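For the birth and death processes of the final chapter, detailed balance yields the stationary law in product form: $$\pi_j = \pi_0 \prod_{i<j} \lambda_i/\mu_{i+1}$$, where $$\lambda_i$$ and $$\mu_i$$ are the birth and death rates. A sketch on a small truncated example (the rates and function names are invented for illustration):

```python
import numpy as np

def bd_stationary(birth, death):
    """Stationary law of a finite birth-death chain.

    birth[i] is the rate i -> i+1, death[i] the rate i+1 -> i.
    Detailed balance gives pi_{j+1} = pi_j * birth[j] / death[j].
    """
    pi = [1.0]                        # unnormalized, with pi_0 = 1
    for lam, mu in zip(birth, death):
        pi.append(pi[-1] * lam / mu)
    pi = np.array(pi)
    return pi / pi.sum()              # normalize to a probability vector

# Four-state truncated queue with constant rates, intensity rho = 1/2:
pi = bd_stationary(birth=[1.0] * 3, death=[2.0] * 3)
print(pi)   # proportional to (1, 1/2, 1/4, 1/8)
```

The same product formula underlies the reversibility of birth and death chains used in the earlier example.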

### MSC:

- 60J10 Markov chains (discrete-time Markov processes on discrete state spaces)
- 60-02 Research exposition (monographs, survey articles) pertaining to probability theory
- 60J27 Continuous-time Markov processes on discrete state spaces