
A quasi-ergodic theorem for evanescent processes. (English) Zbl 0996.60085

Motivated by applications to Markov chain Monte Carlo methods, the authors prove a conditioned version of the ergodic theorem for Markov processes. Let \((X,\mathbf P_{x})\) be a strong Markov process with right-continuous trajectories with left limits, evolving on some state space \(E\). Let \(\tau \) be its lifetime. A nonnegative measurable function \(f\) on \(E\) is called \(\lambda \)-invariant (for some \(\lambda \in \mathbb R\)) if \(\int \mathbf 1_{\{\tau >t\}}f(X_{t}) d\mathbf P_{x} = e^{\lambda t}f(x)\) for every \(t>0\) and every \(x\in E\). A \(\lambda \)-invariant measure is defined analogously. If \(f\) is a \(\lambda \)-invariant function, one may consider \(X\) as a process with a new state space \(E_{f}=\{0<f<\infty \}\) and under a new measure \(\mathbf Q\) defined on the path space by \[ f(x)\int \mathbf 1_{\{\tau >t\}} F d\mathbf Q_{x} = \int \mathbf 1_{\{\tau >t\}} e^{-\lambda t}f(X_{t})F d\mathbf P_{x} \] for all \(t>0\), \(x\in E_{f}\) and all nonnegative \({\mathcal F}_{t}\)-measurable functions \(F\). The process \(X\) is said to be positive \(\lambda \)-recurrent if, for some \(\lambda \leq 0\), there exists a \(\lambda \)-invariant function \(f\) such that \((X,\mathbf Q_{x})\) is positive Harris recurrent. Suppose that \(X\) is irreducible and positive \(\lambda \)-recurrent, with associated \(\lambda \)-invariant function \(f\) and \(\lambda \)-invariant measure \(\mu \). It is shown that if \(\mu (E_{f})<\infty\), then \[ \lim _{t\to \infty }\mathbf P_{x}\left [\frac 1{t}\int ^{t}_{0} g(X_{s}) ds\biggm |\tau >t\right ] = \frac {\int gf d\mu }{\int f d\mu } \] holds for every bounded measurable function \(g\) and each \(x\in E_{f}\).
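As a standard illustration of this result (not taken from the paper itself, but a well-known example fitting the above setting), let \(X\) be one-dimensional Brownian motion on \(E=(0,\pi )\), killed at the first exit time \(\tau \) from the interval. Then \(f(x)=\sin x\) is \(\lambda \)-invariant with \(\lambda =-1/2\), the measure \(\mu (dx)=\sin x dx\) is \(\lambda \)-invariant with \(\mu (E_{f})=2<\infty \), and \((X,\mathbf Q_{x})\) is an ergodic diffusion on \((0,\pi )\), so the hypotheses are satisfied. The theorem then gives, for every bounded measurable \(g\) and each \(x\in (0,\pi )\), \[ \lim _{t\to \infty }\mathbf P_{x}\left [\frac 1{t}\int ^{t}_{0} g(X_{s}) ds\biggm |\tau >t\right ] = \frac {\int ^{\pi }_{0} g(y)\sin ^{2}y dy}{\int ^{\pi }_{0}\sin ^{2}y dy} = \frac 2{\pi }\int ^{\pi }_{0} g(y)\sin ^{2}y dy, \] i.e. the quasi-ergodic limit has density proportional to \(\sin ^{2}\), in contrast with the quasi-stationary distribution of the same process, whose density is proportional to \(\sin \).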

MSC:

60J25 Continuous-time Markov processes on general state spaces

References:

[1] Asmussen, S., Hering, H., 1983. Branching Processes. Birkhäuser, Boston. · Zbl 0516.60095
[2] Breyer, L.A., 1997. Quasistationarity and conditioned Markov processes. Ph.D. Thesis, The University of Queensland.
[3] Getoor, R.K., 1980. Transience and recurrence of Markov processes. Seminar on Probability, vol. XIV, Paris, 1978/1979. Lecture Notes in Mathematics, vol. 784, Springer, Berlin, pp. 397-409.
[4] Jacka, S. D.; Roberts, G. O., Weak convergence of conditioned processes on a countable state space., J. Appl. Probab., 32, 902-916 (1995) · Zbl 0839.60069
[5] Jeulin, T., Compactification de Martin d’un processus droit., Z. Wahrscheinlichkeitstheorie, 42, 229-260 (1977) · Zbl 0362.60081
[6] Meyn, S.P., Tweedie, R.L., 1993. Markov Chains and Stochastic Stability. Springer, Berlin. · Zbl 0925.60001
[7] Meyer, P.A., 1968. Processus de Markov: la frontière de Martin. Lecture Notes in Mathematics, vol. 77. Springer, New York.
[8] Nair, M.G., Pollett, P.K., 1993. On the relationship between \(μ\) · Zbl 0774.60070
[9] Revuz, D., 1979. A survey of limit theorems for Markov chains and processes on general state spaces. Proceedings of the 42nd session of the International Statistical Institute, vol. 2, Manila, 1979. Bull. Inst. Int. Statist. 48(2), 203-210. · Zbl 0521.60079
[10] Smith, A.F.M., Roberts, G.O., 1993. Bayesian computation via the Gibbs sampler and related Markov chain Monte Carlo methods. J. Roy. Statist. Soc. Ser. B 55(1), 3-23. · Zbl 0779.62030
[11] Tuominen, P.; Tweedie, R. L., Exponential decay and ergodicity of general Markov processes and their discrete skeletons., Adv. Appl. Probab., 11, 784-803 (1979) · Zbl 0421.60065