Baumann, Hendrik; Sandmann, Werner
Bounded truncation error for long-run averages in infinite Markov chains. (English) Zbl 1326.60107
J. Appl. Probab. 52, No. 3, 609-621 (2015).

Summary: We consider long-run averages of additive functionals on infinite discrete-state Markov chains, either continuous or discrete in time. Special cases include long-run average costs or rewards, stationary moments of the components of ergodic multi-dimensional Markov chains, queueing network performance measures, and many others. By exploiting Foster-Lyapunov-type criteria involving drift conditions for the finiteness of long-run averages, we determine suitable finite subsets of the state space such that the truncation error is bounded. Illustrative examples demonstrate the application of this method.

Cited in 4 documents.

MSC:
60J10 Markov chains (discrete-time Markov processes on discrete state spaces)
60J27 Continuous-time Markov processes on discrete state spaces
60J55 Local time and additive functionals
60J22 Computational methods in Markov chains
60J28 Applications of continuous-time Markov processes on discrete state spaces

Keywords: infinite Markov chains; additive functionals; long-run average; state space truncation; bounded truncation error; Foster-Lyapunov-type criterion; drift condition

Cite: H. Baumann and W. Sandmann, J. Appl. Probab. 52, No. 3, 609-621 (2015; Zbl 1326.60107)

References:
[1] Anderson, W. J. (1991). Continuous-Time Markov Chains. Springer, New York. · Zbl 0731.60067
[2] Asmussen, S. (2003). Applied Probability and Queues, 2nd edn. Springer, New York. · Zbl 1029.60001
[3] Baumann, H. and Sandmann, W. (2010). Numerical solution of level dependent quasi-birth-and-death processes. Procedia Comput. Sci. 1, 1561-1569.
[4] Baumann, H. and Sandmann, W. (2012). Steady state analysis of level dependent quasi-birth-and-death processes with catastrophes. Comput. Operat. Res. 39, 413-423. · Zbl 1251.90086
[5] Baumann, H. and Sandmann, W. (2013). Computing stationary expectations in level-dependent QBD processes. J. Appl. Prob. 50, 151-165. · Zbl 1273.60089
[6] Baumann, H. and Sandmann, W. (2014). On finite long run costs and rewards in infinite Markov chains. Statist. Prob. Lett. 91, 41-46. · Zbl 1298.60073
[7] Bright, L. and Taylor, P. G. (1995). Calculating the equilibrium distribution in level dependent quasi-birth-and-death processes. Commun. Statist. Stoch. Models 11, 497-525. · Zbl 0837.60081
[8] Chung, K. L. (1960). Markov Chains with Stationary Transition Probabilities. Springer, Berlin. · Zbl 0092.34304
[9] Dayar, T., Sandmann, W., Spieler, D. and Wolf, V. (2011). Infinite level-dependent QBD processes and matrix-analytic solutions for stochastic chemical kinetics. Adv. Appl. Prob. 43, 1005-1026. · Zbl 1233.60042
[10] Foster, F. G. (1953). On the stochastic matrices associated with certain queueing processes. Ann. Math. Statist. 24, 355-360. · Zbl 0051.10601
[11] Gibson, D. and Seneta, E. (1987). Augmented truncations of infinite stochastic matrices. J. Appl. Prob. 24, 600-608. · Zbl 0628.15019
[12] Glynn, P. W. and Zeevi, A. (2008). Bounding stationary expectations of Markov processes. In Markov Processes and Related Topics: A Festschrift for Thomas G. Kurtz (Inst. Math. Statist. Collect. 4), Institute of Mathematical Statistics, Beachwood, OH, pp. 195-214. · Zbl 1170.68389
[13] Golub, G. H. and Seneta, E. (1973). Computation of the stationary distribution of an infinite Markov matrix. Bull. Austral. Math. Soc. 8, 333-341. · Zbl 0248.60064
[14] Golub, G. H. and Seneta, E. (1974). Computation of the stationary distribution of an infinite stochastic matrix of special form. Bull. Austral. Math. Soc. 10, 255-261. · Zbl 0274.15013
[15] Hanschke, T. (1999). A matrix continued fraction algorithm for the multiserver repeated order queue. Math. Comput. Modelling 30, 159-170. · Zbl 1042.60539
[16] Latouche, G. and Taylor, P. (2002). Truncation and augmentation of level-independent QBD processes. Stoch. Process. Appl. 99, 53-80. · Zbl 1058.60066
[17] Pakes, A. G. (1969). Some conditions for ergodicity and recurrence of Markov chains. Operat. Res. 17, 1058-1061. · Zbl 0183.46902
[18] Seneta, E. (1981). Nonnegative Matrices and Markov Chains, 2nd edn. Springer, New York. · Zbl 0471.60001
[19] Serfozo, R. (2009). Basics of Applied Stochastic Processes. Springer, Berlin. · Zbl 1159.60001
[20] Thattai, M. and van Oudenaarden, A. (2001). Intrinsic noise in gene regulatory networks. Proc. Nat. Acad. Sci. USA 98, 8614-8619.
[21] Tweedie, R. L. (1975). Sufficient conditions for regularity, recurrence and ergodicity of Markov processes. Math. Proc. Camb. Phil. Soc. 78, 125-136. · Zbl 0331.60047
[22] Tweedie, R. L. (1983). The existence of moments for stationary Markov chains. J. Appl. Prob. 20, 191-196. · Zbl 0513.60067
[23] Tweedie, R. L. (1988). Invariant measures for Markov chains with no irreducibility assumptions. In A Celebration of Applied Probability (J. Appl. Prob. Spec. Vol. 25A), Applied Probability Trust, Sheffield, pp. 275-285. · Zbl 0663.60054
[24] Tweedie, R. L. (1998). Truncation approximations of invariant measures for Markov chains. J. Appl. Prob. 35, 517-536. · Zbl 0919.60065
[25] Zhao, Y. Q. and Liu, D. (1996). The censored Markov chain and the best augmentation. J. Appl. Prob. 33, 623-629. · Zbl 0865.60007

This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible, without claiming completeness or perfect precision of the matching.
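The state-space truncation discussed in the summary can be illustrated on the simplest infinite chain for which everything is known in closed form. The Python sketch below is purely illustrative and is not the authors' algorithm: it truncates an M/M/1 queue (an infinite birth-death chain) to the finite subset {0, ..., N}, solves for the stationary distribution of the truncated generator, and computes the long-run average queue length, i.e. the long-run average of the additive functional f(n) = n. The rates lam and mu, the choice of f, and the truncation levels are all assumptions made for the example; the paper's actual contribution is determining a suitable N a priori from drift conditions, rather than by comparison with an analytic value as done here.

```python
import numpy as np

# Illustrative example (not the paper's algorithm): M/M/1 queue with
# arrival rate lam and service rate mu; rho = lam/mu < 1 ensures ergodicity.
lam, mu = 0.8, 1.0
rho = lam / mu

def truncated_average(N):
    """Long-run average of f(n) = n on the chain truncated to {0, ..., N}."""
    # Generator of the truncated birth-death chain: births at rate lam,
    # deaths at rate mu; diagonal set so that every row sums to zero.
    Q = np.zeros((N + 1, N + 1))
    for n in range(N):
        Q[n, n + 1] = lam
        Q[n + 1, n] = mu
    np.fill_diagonal(Q, -Q.sum(axis=1))
    # Stationary distribution: solve pi Q = 0 together with sum(pi) = 1,
    # as an (overdetermined but consistent) least-squares system.
    A = np.vstack([Q.T, np.ones(N + 1)])
    b = np.zeros(N + 2)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return float(pi @ np.arange(N + 1))

# Analytic long-run average queue length of the infinite chain: rho/(1-rho).
exact = rho / (1 - rho)
```

For growing N the truncation error |truncated_average(N) - exact| decays geometrically here, since the stationary tail probabilities decay like rho^n; the point of the paper's Foster-Lyapunov approach is to certify such an error bound without access to the exact stationary distribution.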