zbMATH — the first resource for mathematics

Convolutional autoencoder and conditional random fields hybrid for predicting spatial-temporal chaos. (English) Zbl 1429.37047
Summary: We present an approach for data-driven prediction of high-dimensional chaotic time series generated by spatially extended systems. The algorithm employs a convolutional autoencoder for dimension reduction and feature extraction, combined with a probabilistic prediction scheme, consisting of a conditional random field, that operates in the feature space. The future evolution of the spatially extended system is predicted using a feedback loop and iterated predictions. The excellent performance of this method is illustrated and evaluated using Lorenz-96 systems and Kuramoto-Sivashinsky equations of different sizes, generating time series of different dimensionality and complexity.
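The Lorenz-96 system used as a benchmark in the summary is straightforward to simulate. The sketch below is a generic illustration, not the paper's actual experimental setup: the forcing F = 8, the 40 grid sites, the step size, and the RK4 integrator are all common default choices assumed here for concreteness.

```python
import numpy as np

def lorenz96_rhs(x, F=8.0):
    # Lorenz-96: dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F,
    # with cyclic indices implemented via np.roll.
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def integrate_rk4(x0, dt=0.01, n_steps=1000, F=8.0):
    # Classical fourth-order Runge-Kutta integration of the Lorenz-96 ODE.
    traj = np.empty((n_steps + 1, x0.size))
    x = np.asarray(x0, dtype=float)
    traj[0] = x
    for k in range(n_steps):
        k1 = lorenz96_rhs(x, F)
        k2 = lorenz96_rhs(x + 0.5 * dt * k1, F)
        k3 = lorenz96_rhs(x + 0.5 * dt * k2, F)
        k4 = lorenz96_rhs(x + dt * k3, F)
        x = x + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        traj[k + 1] = x
    return traj

# Small random perturbation of the constant equilibrium x_i = F
rng = np.random.default_rng(0)
x0 = 8.0 + 0.01 * rng.standard_normal(40)
data = integrate_rk4(x0, dt=0.01, n_steps=2000)
print(data.shape)  # (2001, 40): a spatiotemporal time series
```

A trajectory generated this way (after discarding a transient) is the kind of high-dimensional spatiotemporal data that the autoencoder-plus-CRF pipeline described above would take as input.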
©2019 American Institute of Physics

37M10 Time series analysis of dynamical systems
62M10 Time series, auto-correlation, regression, etc. in statistics (GARCH)
Full Text: DOI
[1] Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G. S., Davis, A., Dean, J., Devin, M., Ghemawat, S., Goodfellow, I., Harp, A., Irving, G., Isard, M., Jia, Y., Jozefowicz, R., Kaiser, L., Kudlur, M., Levenberg, J., Mané, D., Monga, R., Moore, S., Murray, D., Olah, C., Schuster, M., Shlens, J., Steiner, B., Sutskever, I., Talwar, K., Tucker, P., Vanhoucke, V., Vasudevan, V., Viégas, F., Vinyals, O., Warden, P., Wattenberg, M., Wicke, M., Yu, Y., and Zheng, X., “TensorFlow: Large-scale machine learning on heterogeneous systems” (2015).
[2] Abebe, A. J.; Solomatine, D. P.; Venneker, R. G. W., Application of adaptive fuzzy rule-based models for reconstruction of missing precipitation events, Hydrolog. Sci. J., 45, 425-436 (2000)
[3] Box, G. E. P.; Jenkins, G., Time Series Analysis, Forecasting and Control (1990)
[4] Cheng, Z., Sun, H., Takeuchi, M., and Katto, J., “Deep convolutional autoencoder-based lossy image compression,” in 2018 Picture Coding Symposium, PCS 2018—Proceedings (Institute of Electrical and Electronics Engineers Inc., 2018), pp. 253-257.
[5] Chollet, F. et al., “Keras” (2015).
[6] Cox, S.; Matthews, P., Exponential time differencing for stiff systems, J. Comput. Phys., 176, 430-455 (2002) · Zbl 1005.65069
[7] Dong, C., Loy, C. C., He, K., and Tang, X., “Learning a deep convolutional network for image super-resolution,” in Computer Vision—ECCV 2014, edited by D. Fleet, T. Pajdla, B. Schiele, and T. Tuytelaars (Springer International Publishing, Cham, 2014), pp. 184-199.
[8] Gao, S.; Brekelmans, R.; Steeg, G. V.; Galstyan, A.
[9] Hahnloser, R. H.; Sarpeshkar, R.; Mahowald, M. A.; Douglas, R. J.; Seung, H. S., Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit, Nature, 405, 947 (2000)
[10] He, K., Zhang, X., Ren, S., and Sun, J., “Deep residual learning for image recognition,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2016), pp. 770-778.
[11] Herzog, S.; Wörgötter, F.; Parlitz, U., Data-driven modeling and prediction of complex spatio-temporal dynamics in excitable media, Front. Appl. Math. Stat., 4, 60 (2018)
[12] Higgins, I.; Matthey, L.; Glorot, X.; Pal, A.; Uria, B.; Blundell, C.; Mohamed, S.; Lerchner, A.
[13] Hinton, G. E.; Salakhutdinov, R. R., Reducing the dimensionality of data with neural networks, Science, 313, 504-507 (2006) · Zbl 1226.68083
[14] Ioffe, S.; Szegedy, C.
[15] Isensee, J.; Datseris, G.; Parlitz, U., Predicting spatio-temporal time series using dimension reduced local states, J. Nonlinear Sci.
[16] Jaderberg, M.; Dalibard, V.; Osindero, S.; Czarnecki, W. M.; Donahue, J.; Razavi, A.; Vinyals, O.; Green, T.; Dunning, I.; Simonyan, K.; Fernando, C.; Kavukcuoglu, K.
[17] Kaplan, J. L. and Yorke, J. A., “Chaotic behavior of multidimensional difference equations,” in Functional Differential Equations and Approximation of Fixed Points, edited by H.-O. Peitgen and H.-O. Walther (Springer, Berlin, 1979), pp. 204-227.
[18] Kingma, D. P.; Ba, J.
[19] Koller, D.; Friedman, N., Probabilistic Graphical Models: Principles and Techniques (Adaptive Computation and Machine Learning Series) (2009)
[20] Krizhevsky, A., Sutskever, I., and Hinton, G. E., “Imagenet classification with deep convolutional neural networks,” in Proceedings of the 25th International Conference on Neural Information Processing Systems—Volume 1, series and number NIPS’12 (Curran Associates Inc., 2012), pp. 1097-1105.
[21] Kuramoto, Y., Diffusion-induced chaos in reaction systems, Prog. Theor. Phys. Suppl., 64, 346-367 (1978)
[22] Lafferty, J., McCallum, A., and Pereira, F., “Conditional random fields: Probabilistic models for segmenting and labeling sequence data,” in Proceedings of the 18th International Conference on Machine Learning (Morgan Kaufmann Publishers Inc., 2001), pp. 282-289.
[23] LeCun, Y., Boser, B. E., Denker, J. S., Henderson, D., Howard, R. E., Hubbard, W. E., and Jackel, L. D., “Handwritten digit recognition with a back-propagation network,” in Advances in Neural Information Processing Systems 2, edited by D. S. Touretzky (Morgan-Kaufmann, 1990), pp. 396-404.
[24] Lorenz, E., “Predictability: a problem partly solved,” in Seminar on Predictability, 4-8 September 1995, ECMWF (ECMWF, Shinfield Park, Reading, 1995), Vol. 1, pp. 1-18.
[25] Lorenz, E. N., Deterministic nonperiodic flow, J. Atmos. Sci., 20, 130-141 (1963) · Zbl 1417.37129
[26] Maas, A. L., Hannun, A. Y., and Ng, A. Y., “Rectifier nonlinearities improve neural network acoustic models,” in Proceedings of the 30th International Conference on Machine Learning (Atlanta, Georgia, 2013).
[27] McCallum, A., “Efficiently inducing features of conditional random fields,” in Proceedings of the Nineteenth Conference on Uncertainty in Artificial Intelligence, series and number UAI’03 (Morgan Kaufmann Publishers Inc., San Francisco, CA, 2003), pp. 403-410.
[28] Nassar, J., Linderman, S., Bugallo, M., and Park, I. M., “Tree-structured recurrent switching linear dynamical systems for multi-scale modeling,” in International Conference on Learning Representations (International Conference on Learning Representations (ICLR), 2019).
[29] Paraschos, A., Daniel, C., Peters, J. R., and Neumann, G., “Probabilistic movement primitives,” in Advances in Neural Information Processing Systems 26, edited by C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger (Curran Associates, Inc., 2013), pp. 2616-2624.
[30] Parlitz, U.; Merkwirth, C., Prediction of spatiotemporal time series based on reconstructed local states, Phys. Rev. Lett., 84, 1890-1893 (2000)
[31] Pathak, J.; Hunt, B.; Girvan, M.; Lu, Z.; Ott, E., Model-free prediction of large spatiotemporally chaotic systems from data: A reservoir computing approach, Phys. Rev. Lett., 120, 024102 (2018)
[32] Pavlovski, M., Zhou, F., Arsov, N., Kocarev, L., and Obradovic, Z., “Generalization-aware structured regression towards balancing bias and variance,” in Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI-18 (International Joint Conferences on Artificial Intelligence Organization, 2018), pp. 2616-2622.
[33] Penkovsky, B.; Porte, X.; Jacquot, M.; Larger, L.; Brunner, D., Coupled nonlinear delay systems as deep convolutional neural networks, Phys. Rev. Lett., 123, 054101 (2019)
[34] Petzold, L., Automatic selection of methods for solving stiff and nonstiff systems of ordinary differential equations, SIAM J. Sci. Stat. Comput., 4, 136-148 (1983) · Zbl 0518.65051
[35] Pinheiro, F. R.; van Leeuwen, P. J.; Parlitz, U., An ensemble framework for time delay synchronization, Q. J. R. Meteorolog. Soc., 144, 305-316 (2018)
[36] Quattoni, A.; Wang, S.; Morency, L.; Collins, M.; Darrell, T., Hidden conditional random fields, IEEE Trans. Pattern Anal. Mach. Intell., 29, 1848-1852 (2007)
[37] Rey, D.; Eldridge, M.; Kostuk, M.; Abarbanel, H. D.; Schumann-Bischoff, J.; Parlitz, U., Accurate state and parameter estimation in nonlinear systems with sparse observations, Phys. Lett. A, 378, 869-873 (2014) · Zbl 1293.93722
[38] van Rossum, G.; Drake, F. L., The Python Language Reference Manual (2011)
[39] Selvaraju, R. R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D.
[40] Sivashinsky, G., Nonlinear analysis of hydrodynamic instability in laminar flames. I: Derivation of basic equations, Acta Astronaut., 4, 1177-1206 (1977) · Zbl 0427.76047
[41] Sivashinsky, G. I., On flame propagation under conditions of stoichiometry, SIAM J. Appl. Math., 39, 67-82 (1980) · Zbl 0464.76055
[42] Sivashinsky, G. I.; Michelson, D. M., On irregular wavy flow of a liquid film down a vertical plane, Prog. Theor. Phys., 63, 2112-2114 (1980)
[43] Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R., Dropout: A simple way to prevent neural networks from overfitting, J. Mach. Learn. Res., 15, 1929-1958 (2014) · Zbl 1318.68153
[44] Sutton, C.; McCallum, A., An introduction to conditional random fields, Found. Trends Mach. Learn., 4, 267-373 (2012) · Zbl 1253.68001
[45] Tschannen, M.; Bachem, O.; Lucic, M.
[46] Vemuri, V., Artificial Neural Networks: Theoretical Concepts, Computer Society Press Technology Series: Neural networks (Computer Society Press of the IEEE, 1988).
[47] Vlachas, P.; Byeon, W.; Yi Wan, Z.; Sapsis, T. P.; Koumoutsakos, P., Data-driven forecasting of high-dimensional chaotic systems with long short-term memory networks, Proc. R. Soc. A Math. Phys. Eng. Sci., 474, 1 (2018) · Zbl 1402.92030
[48] Zhang, W.; Wang, B.; Ye, Z.; Quan, J., Efficient method for limit cycle flutter analysis based on nonlinear aerodynamic reduced-order models, AIAA J., 50, 1019-1028 (2012)
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.