zbMATH — the first resource for mathematics

Variable length Markov chains. (English) Zbl 0983.62048
Summary: We study estimation in the class of stationary variable length Markov chains (VLMC) on a finite space. The processes in this class are still Markovian of potentially high order, but their memory has variable length, yielding a much bigger and structurally richer class of models than ordinary higher-order Markov chains. From an algorithmic viewpoint, the VLMC model class has attracted interest in information theory and machine learning, but its statistical properties have not yet been explored. Provided that good estimation is available, the additional structural richness of the model class enhances predictive power by striking a better trade-off between model bias and variance, and it allows a better structural description, which can be of specific interest. The latter is exemplified with some DNA data.
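The defining feature described above is that the next-symbol distribution depends on a past of variable length: the relevant "context" is the longest suffix of the history found in a context tree. A minimal sketch in Python, with a hypothetical binary context map (the contexts and probabilities are illustrative, not taken from the paper):

```python
def predict(contexts, history, default):
    """Return the next-symbol distribution of the longest context
    matching the history (most recent symbol last).

    `contexts` maps a context tuple to a dict of next-symbol
    probabilities; `default` is the memoryless fallback.
    """
    # Try suffixes of the history from longest to shortest.
    for k in range(len(history), 0, -1):
        suffix = tuple(history[-k:])
        if suffix in contexts:
            return contexts[suffix]
    return default

# Hypothetical binary VLMC: after a 1 one symbol of memory suffices,
# after a 0 the chain looks one step further back.
contexts = {
    (1,): {0: 0.7, 1: 0.3},
    (0, 0): {0: 0.1, 1: 0.9},
    (1, 0): {0: 0.5, 1: 0.5},
}
dist = predict(contexts, [1, 1, 0], default={0: 0.5, 1: 0.5})
```

Note how the memory length is 1 after a 1 but 2 after a 0; a full order-2 Markov chain would instead carry parameters for every length-2 past, which is the bias-variance trade-off the summary alludes to.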
A version of the tree-structured context algorithm, proposed by J. Rissanen [IEEE Trans. Inf. Theory IT-29, 656-664 (1983; Zbl 0521.94010)] in an information-theoretic setting, is shown to have good asymptotic properties for estimation in the class of VLMCs; this remains true even when the dimensionality of the underlying model grows. Furthermore, consistent estimation of minimal state spaces and mixing properties of fitted models are established.
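The core step of context-algorithm-style pruning is a likelihood-ratio comparison between a long context and its one-symbol-shorter parent: the longer context is kept only if a KL-type statistic exceeds a cutoff. A sketch of that decision, with hypothetical counts and cutoff (the paper's exact statistic and cutoff choice differ in detail):

```python
from math import log

def delta(counts_child, counts_parent):
    """KL-type pruning statistic: sum over symbols of
    count_child(sym) * log(p_child(sym) / p_parent(sym)),
    with p's the empirical frequencies.

    Assumes every symbol seen under the child context also occurs
    under the parent context (true here, since the parent's counts
    include the child's).
    """
    n_child = sum(counts_child.values())
    n_parent = sum(counts_parent.values())
    stat = 0.0
    for sym, c in counts_child.items():
        if c > 0:
            p_child = c / n_child
            p_parent = counts_parent[sym] / n_parent
            stat += c * log(p_child / p_parent)
    return stat

# Hypothetical counts following a long context vs. its one-symbol-shorter
# parent; keep the longer context only if the statistic exceeds the
# (hypothetical) cutoff K = 2.0.
child_counts = {'a': 30, 'c': 10}
parent_counts = {'a': 70, 'c': 30}
keep_longer_context = delta(child_counts, parent_counts) >= 2.0
```

Here the child's frequencies (0.75/0.25) barely differ from the parent's (0.70/0.30), so the statistic stays below the cutoff and the longer context would be pruned away.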
We also propose a new bootstrap scheme based on fitted VLMCs. We show its validity for quite general stationary categorical time series and for a broad range of statistical procedures.
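The bootstrap scheme rests on simulating new categorical series from the fitted VLMC. A minimal sketch of that resampling step, reusing the longest-suffix lookup and a hypothetical fitted context map (illustrative numbers, not an actual fit from the context algorithm):

```python
import random

# Hypothetical fitted context tree for a binary chain.
fitted_contexts = {
    (1,): {0: 0.7, 1: 0.3},
    (0, 0): {0: 0.1, 1: 0.9},
    (1, 0): {0: 0.5, 1: 0.5},
}

def vlmc_bootstrap(contexts, default, init, n, seed=0):
    """Simulate one bootstrap series of length n from a fitted VLMC.

    Each new symbol is drawn from the distribution attached to the
    longest context matching the simulated past, falling back to
    `default` when no context matches.
    """
    rng = random.Random(seed)
    series = list(init)
    while len(series) < n:
        dist = default
        for k in range(len(series), 0, -1):
            suffix = tuple(series[-k:])
            if suffix in contexts:
                dist = contexts[suffix]
                break
        symbols = list(dist)
        weights = [dist[s] for s in symbols]
        series.append(rng.choices(symbols, weights=weights)[0])
    return series

boot = vlmc_bootstrap(fitted_contexts, {0: 0.5, 1: 0.5}, [0], 100, seed=7)
```

A statistic of interest would then be recomputed on many such simulated series to approximate its sampling distribution, as in other model-based bootstraps for dependent data.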

62M05 Markov processes: estimation; hidden Markov models
62B10 Statistical aspects of information-theoretic topics
60J10 Markov chains (discrete-time Markov processes on discrete state spaces)
62G09 Nonparametric statistical resampling methods
62M10 Time series, auto-correlation, regression, etc. in statistics (GARCH)
94A15 Information theory (general)
Full Text: DOI
[1] BICKEL, P. J., GÖTZE, F. and VAN ZWET, W. R. 1997. Resampling fewer than n observations: gains, losses, and remedies for losses. Statist. Sinica 7, 1-32. · Zbl 0927.62043
[2] BRAUN, J. V. and MÜLLER, H.-G. 1998. Statistical methods for DNA sequence segmentation. Statist. Sci. 13, 142-162. · Zbl 0960.62121 · doi:10.1214/ss/1028905933
[3] BREIMAN, L., FRIEDMAN, J. H., OLSHEN, R. A. and STONE, C. J. 1984. Classification and Regression Trees. Wadsworth, Belmont, CA. · Zbl 0541.62042
[4] BRILLINGER, D. R. 1995. Trend analysis: binary-valued and point cases. Stochastic Hydrology and Hydraulics 9, 207-213. · Zbl 0842.76070
[5] BÜHLMANN, P. 1999. Model selection for variable length Markov chains and tuning the context algorithm. Ann. Inst. Statist. Math. To appear.
[6] COVER, T. M. and THOMAS, J. A. 1991. Elements of Information Theory. Wiley, New York. · Zbl 0762.94001
[7] DOUKHAN, P. 1994. Mixing: Properties and Examples. Lecture Notes in Statist. 85. Springer, Berlin. · Zbl 0801.60027
[8] EFRON, B. 1979. Bootstrap methods: another look at the jackknife. Ann. Statist. 7, 1-26. · Zbl 0406.62024 · doi:10.1214/aos/1176344552
[9] FAHRMEIR, L. and TUTZ, G. 1994. Multivariate Statistical Modelling Based on Generalized Linear Models. Springer, Berlin. · Zbl 0809.62064
[10] FEDER, M., MERHAV, N. and GUTMAN, M. 1992. Universal prediction of individual sequences. IEEE Trans. Inform. Theory IT-38, 1258-1270. · Zbl 0775.94076 · doi:10.1109/18.144706
[11] GUTTORP, P. 1995. Stochastic Modeling of Scientific Data. Chapman and Hall, London. · Zbl 0862.60034
[12] IOSIFESCU, M. and THEODORESCU, R. 1969. Random Processes and Learning. Springer, Berlin. · Zbl 0194.51101
[13] KÜNSCH, H. R. 1989. The jackknife and the bootstrap for general stationary observations. Ann. Statist. 17, 1217-1241. · Zbl 0684.62035 · doi:10.1214/aos/1176347265
[14] PRUM, B., RODOLPHE, F. and DE TURCKHEIM, E. 1995. Finding words with unexpected frequencies in deoxyribonucleic acid sequences. J. Roy. Statist. Soc. Ser. B 57, 205-220. · Zbl 0817.92012
[15] RAFTERY, A. and TAVARÉ, S. 1994. Estimation and modelling repeated patterns in high-order Markov chains with the mixture transition distribution model. Appl. Statist. 43, 179-199. · Zbl 0825.62667 · doi:10.2307/2986120
[16] RAJARSHI, M. B. 1990. Bootstrap in Markov-sequences based on estimates of transition density. Ann. Inst. Statist. Math. 42, 253-268. · Zbl 0714.62036 · doi:10.1007/BF00050835
[17] RISSANEN, J. 1983. A universal data compression system. IEEE Trans. Inform. Theory IT-29, 656-664. · Zbl 0521.94010 · doi:10.1109/TIT.1983.1056741
[18] RISSANEN, J. 1986. Complexity of strings in the class of Markov sources. IEEE Trans. Inform. Theory IT-32, 526-532. · Zbl 0621.94005 · doi:10.1109/TIT.1986.1057210
[19] RISSANEN, J. 1989. Stochastic Complexity in Statistical Inquiry. World Scientific, Singapore. · Zbl 0800.68508
[20] RITOV, Y. and BICKEL, P. J. 1990. Achieving information bounds in non- and semiparametric models. Ann. Statist. 18, 925-938. · Zbl 0722.62025 · doi:10.1214/aos/1176347633
[21] WEINBERGER, M. J. and FEDER, M. 1994. Predictive stochastic complexity and model estimation for finite-state processes. J. Statist. Plann. Inference 39, 353-372. · Zbl 0804.62008 · doi:10.1016/0378-3758(94)90092-2
[22] WEINBERGER, M. J., LEMPEL, A. and ZIV, J. 1992. A sequential algorithm for the universal coding of finite memory sources. IEEE Trans. Inform. Theory IT-38, 1002-1014. · Zbl 0749.94007 · doi:10.1109/18.135641
[23] WEINBERGER, M. J., RISSANEN, J. and FEDER, M. 1995. A universal finite memory source. IEEE Trans. Inform. Theory IT-41, 643-652. · Zbl 0820.94002 · doi:10.1109/18.382011
[24] WITHERS, C. S. 1981. Central limit theorems for dependent variables I. Z. Wahrsch. Verw. Gebiete. · Zbl 0451.60027 · doi:10.1007/BF01025872
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.