
Lumpings of Markov chains, entropy rate preservation, and higher-order lumpability. (English) Zbl 1309.60077

Summary: A lumping of a Markov chain is a coordinatewise projection of the chain. We characterise the entropy rate preservation of a lumping of an aperiodic and irreducible Markov chain on a finite state space by the random growth rate of the cardinality of the realisable preimage of a finite-length trajectory of the lumped chain, and by the information needed to reconstruct original trajectories from their lumped images. Both are purely combinatorial criteria, depending only on the transition graph of the Markov chain and on the lumping function. A lumping is strongly \(k\)-lumpable if and only if the lumped process is a \(k\)-th-order Markov chain for every starting distribution of the original Markov chain. We characterise strong \(k\)-lumpability via the tightness of stationary entropic bounds. In the sparse setting, we give sufficient conditions on the lumping both to preserve the entropy rate and to be strongly \(k\)-lumpable.
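The entropic quantities in the summary can be illustrated numerically. The following Python sketch is an illustrative assumption, not the paper's construction: the chain \(P\), the lumping map `f`, and all function names are made up for the example. It computes the entropy rate of a small aperiodic, irreducible chain and the per-symbol block entropies of its lumped image, which upper-bound the entropy rate of the lumped process.

```python
import numpy as np
from itertools import product

# Illustrative 3-state aperiodic, irreducible chain; the lumping f merges
# states 1 and 2 into one block (hypothetical example, not from the paper).
P = np.array([[0.0, 0.5, 0.5],
              [0.3, 0.0, 0.7],
              [0.3, 0.7, 0.0]])
f = {0: 0, 1: 1, 2: 1}

def stationary(P):
    """Stationary distribution: left eigenvector of P for eigenvalue 1."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmax(np.real(vals))])
    return v / v.sum()

def entropy_rate(P):
    """Entropy rate H(X) = -sum_i pi_i sum_j P_ij log2 P_ij (bits/step)."""
    pi = stationary(P)
    h = 0.0
    for i, row in enumerate(P):
        for p in row:
            if p > 0:
                h -= pi[i] * p * np.log2(p)
    return h

def lumped_block_entropy(P, f, n):
    """H(Y_1..Y_n)/n for the lumped process Y_t = f(X_t), X stationary.

    Sums the stationary path probabilities of all length-n trajectories
    of X sharing the same lumped image; the paths contributing positive
    mass to a given image form its realisable preimage.
    """
    pi = stationary(P)
    probs = {}
    for path in product(range(len(P)), repeat=n):
        p = pi[path[0]]
        for a, b in zip(path, path[1:]):
            p *= P[a, b]
            if p == 0.0:
                break
        if p > 0:
            y = tuple(f[s] for s in path)
            probs[y] = probs.get(y, 0.0) + p
    return -sum(q * np.log2(q) for q in probs.values()) / n
```

For a stationary process the per-symbol block entropy \(H(Y_1,\dots,Y_n)/n\) is non-increasing in \(n\) and converges to the entropy rate of the lumped process; comparing its limit with `entropy_rate(P)` quantifies how much information the lumping discards.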

MSC:

60J10 Markov chains (discrete-time Markov processes on discrete state spaces)
60G17 Sample path properties
60G10 Stationary stochastic processes
94A17 Measures of information, entropy
65C40 Numerical analysis or methods applied to Markov chains
