zbMATH — the first resource for mathematics

Variable selection in time series forecasting using random forests. (English) Zbl 07052052
Summary: Time series forecasting using machine learning algorithms has gained popularity in recent years. Random forests have been applied to time series forecasting, yet most of their forecasting properties remain unexplored. Here we assess the performance of random forests (RF) in one-step forecasting, using two large datasets of short time series, with the aim of suggesting an optimal set of predictor variables; we also compare RF to benchmark methods. The first dataset comprises 16,000 time series simulated from a variety of Autoregressive Fractionally Integrated Moving Average (ARFIMA) models. The second dataset consists of 135 mean annual temperature time series. The highest predictive performance of RF is observed when a small number of recent lagged values is used as predictor variables. This outcome could prove useful in future applications, with the prospect of achieving higher predictive accuracy.
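The approach described above, fitting a random forest on a few recent lagged values and forecasting one step ahead, can be sketched as follows. This is a minimal illustration, not the paper's exact configuration: the AR(1) simulation, lag count, and hyperparameters here are assumptions chosen for brevity.

```python
# Sketch: one-step forecasting with a random forest on lagged predictors.
# Settings (AR(1) data, 3 lags, 100 trees) are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)

# Simulate a short autoregressive series as a stand-in for ARFIMA data.
n = 200
series = np.zeros(n)
for t in range(1, n):
    series[t] = 0.7 * series[t - 1] + rng.normal()

# Design matrix: each row holds the n_lags values preceding the target.
# The paper reports that a small number of recent lags works best.
n_lags = 3
X = np.column_stack([series[i : n - n_lags + i] for i in range(n_lags)])
y = series[n_lags:]

# Hold out the final observation and issue a one-step-ahead forecast.
rf = RandomForestRegressor(n_estimators=100, random_state=0)
rf.fit(X[:-1], y[:-1])
forecast = rf.predict(X[-1:])
print(f"forecast={forecast[0]:.3f}, actual={y[-1]:.3f}")
```

Multi-step forecasts would require a recursive or direct strategy on top of this one-step setup; the study above restricts attention to the one-step case.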

62 Statistics
68 Computer science
Full Text: DOI