
Decomposition of neurological multivariate time series by state space modelling. (English) Zbl 1221.92020

Summary: Decomposition of multivariate time series data into independent source components forms an important part of preprocessing and analysis of time-resolved data in neuroscience. We briefly review the available tools for this purpose, such as Factor Analysis (FA) and Independent Component Analysis (ICA), then we show how linear state space modelling, a methodology from statistical time series analysis, can be employed for the same purpose. State space modelling, a generalization of classical ARMA modelling, is well suited for exploiting the dynamical information encoded in the temporal ordering of time series data, while this information remains inaccessible to FA and most ICA algorithms. As a result, much more detailed decompositions become possible, and both components with sharp power spectrum, such as alpha components, sinusoidal artifacts, or sleep spindles, and with broad power spectrum, such as FMRI scanner artifacts or epileptic spiking components, can be separated, even in the absence of prior information.
In addition, three generalizations are discussed, the first relaxing the independence assumption, the second introducing non-stationarity of the covariance of the noise driving the dynamics, and the third allowing for non-Gaussianity of the data through a non-linear observation function. Three application examples are presented: one electrocardiogram time series and two electroencephalogram (EEG) time series. The two EEG examples, both from epilepsy patients, demonstrate the separation and removal of various artifacts, including hum noise and FMRI scanner artifacts, and the identification of sleep spindles, epileptic foci, and spiking components. Decompositions obtained by two ICA algorithms are shown for comparison.
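For the second and third generalizations, again only as an illustrative sketch in the notation introduced above: non-stationarity of the dynamical noise covariance can be captured by a GARCH-type recursion, in its simplest scalar (1,1) form
\[
\sigma_t^2 = c + a\,\nu_{t-1}^2 + b\,\sigma_{t-1}^2,
\]
where \(\nu_{t-1}\) denotes the most recent (estimated) noise or innovation term, so that the driving noise variance adapts to the magnitude of recent prediction errors; non-Gaussianity of the observations can be accommodated by replacing the linear observation equation with \(y_t = g(C x_t) + \varepsilon_t\) for a suitable non-linear function \(g\). In both cases, parameter estimation can still proceed via the prediction-error (innovation) likelihood of a suitably extended Kalman filter.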

MSC:

92C20 Neural biology
62M10 Time series, auto-correlation, regression, etc. in statistics (GARCH)
62H25 Factor analysis and principal components; correspondence analysis
62P10 Applications of statistics to biology and medical sciences; meta analysis
92C55 Biomedical imaging and signal processing

Software:

ICALAB; ARfit; FastICA
