The role of the information set for forecasting – with applications to risk management. (English) Zbl 1454.62277

Summary: Predictions are issued on the basis of certain information. If the forecasting mechanisms are correctly specified, a larger amount of available information should lead to better forecasts. For point forecasts, we show how the effect of increasing the information set can be quantified by using strictly consistent scoring functions, where it results in smaller average scores. Further, we show that the classical Diebold-Mariano test, based on strictly consistent scoring functions and asymptotically ideal forecasts, is a consistent test for the effect of an increase in a sequence of information sets on \(h\)-step point forecasts. For the value at risk (VaR), we show that the average score, which corresponds to the average quantile risk, directly relates to the expected shortfall. Thus, increasing the information set will result in VaR forecasts which lead on average to smaller expected shortfalls. We illustrate our results in simulations and applications to stock returns for unconditional versus conditional risk management as well as univariate modeling of portfolio returns versus multivariate modeling of individual risk factors. The role of the information set for evaluating probabilistic forecasts by using strictly proper scoring rules is also discussed.
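The summary's central claim — that under a strictly consistent scoring function a correctly specified forecast based on the larger information set attains a smaller average score — can be sketched in a small simulation. The sketch below is illustrative and not taken from the paper: the volatility process, the sample size, the VaR level, and the use of the full-sample empirical quantile as the "unconditional" forecast are all assumptions made here for demonstration.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)

# Simulate returns y_t = sigma_t * eps_t. Here sigma_t is assumed observable
# one step ahead, so it belongs to the larger information set; the smaller
# information set contains only the marginal distribution of y.
n = 50_000
sigma = rng.uniform(0.5, 2.0, size=n)      # hypothetical volatility process
y = sigma * rng.standard_normal(n)         # realized returns

alpha = 0.05                               # VaR level (lower tail quantile)

def quantile_score(x, y, alpha):
    """Strictly consistent scoring function for the alpha-quantile
    (the pinball / quantile loss): S(x, y) = (1{y <= x} - alpha)(x - y)."""
    return ((y <= x).astype(float) - alpha) * (x - y)

z = NormalDist().inv_cdf(alpha)            # standard normal alpha-quantile

# Conditional VaR forecast uses sigma_t (larger information set);
# the unconditional forecast uses only the marginal empirical quantile
# (a simplification: a real forecaster would use past data only).
var_cond = sigma * z
var_uncond = np.full(n, np.quantile(y, alpha))

s_cond = quantile_score(var_cond, y, alpha)
s_uncond = quantile_score(var_uncond, y, alpha)

# Diebold-Mariano statistic for the score differential. For 1-step-ahead
# forecasts no HAC variance correction is needed under the null.
d = s_uncond - s_cond
dm = d.mean() / (d.std(ddof=1) / np.sqrt(n))

print(f"avg score, conditional:   {s_cond.mean():.5f}")
print(f"avg score, unconditional: {s_uncond.mean():.5f}")
print(f"DM statistic:             {dm:.2f}")
```

In line with the summary, the conditional forecast yields the smaller average quantile score, and a large positive DM statistic rejects equal predictive accuracy in favor of the larger information set.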


62M20 Inference from stochastic processes and prediction
62M07 Non-Markovian processes: hypothesis testing
62P05 Applications of statistics to actuarial sciences and financial mathematics
91G70 Statistical methods; risk measures


Full Text: DOI arXiv Euclid


[1] Acerbi, C. and Tasche, D. (2002). On the coherence of expected shortfall. J. Banking Finance 26 1487-1503.
[2] Bao, Y., Lee, T.-H. and Saltoğlu, B. (2006). Evaluating predictive performance of value-at-risk models in emerging markets: A reality check. J. Forecast. 25 101-128.
[3] Berkowitz, J., Christoffersen, P. F. and Pelletier, D. (2011). Evaluating value-at-risk models with desk-level data. Management Science 57 2213-2227.
[4] Bröcker, J. (2009). Reliability, sufficiency, and the decomposition of proper scores. Q. J. Roy. Meteor. Soc. 135 1512-1519.
[5] Christoffersen, P. F. (1998). Evaluating interval forecasts. Internat. Econom. Rev. 39 841-862.
[6] Christoffersen, P. F. (2009). Value-at-risk models. In Handbook of Financial Time Series (T. Mikosch, J. P. Kreiß, R. A. Davis and T. G. Andersen, eds.) 753-766. Springer, Berlin. · Zbl 1178.91075
[7] DeGroot, M. H. and Fienberg, S. E. (1983). The comparison and evaluation of forecasters. J. Roy. Stat. Soc. Ser. D (The Statistician) 32 12-22.
[8] Diebold, F. X. (2012). Comparing predictive accuracy, twenty years later: A personal perspective on the use and abuse of Diebold-Mariano tests. Working Paper No. 18391, NBER.
[9] Diebold, F. X. and Mariano, R. S. (1995). Comparing predictive accuracy. J. Bus. Econom. Statist. 13 253-263.
[10] Durrett, R. (2005). Probability: Theory and Examples, 3rd ed. Thomson Brooks/Cole, Belmont, CA. · Zbl 1202.60002
[11] Engle, R. (2002). Dynamic conditional correlation: A simple class of multivariate generalized autoregressive conditional heteroskedasticity models. J. Bus. Econom. Statist. 20 339-350.
[12] Escanciano, J. C. and Olmo, J. (2011). Robust backtesting tests for value-at-risk models. J. Financ. Economet. 9 132-161.
[13] Giacomini, R. and White, H. (2006). Tests of conditional predictive ability. Econometrica 74 1545-1578. · Zbl 1187.91151
[14] Gneiting, T. (2011). Making and evaluating point forecasts. J. Amer. Statist. Assoc. 106 746-762. · Zbl 1232.62028
[15] Gneiting, T., Balabdaoui, F. and Raftery, A. E. (2007). Probabilistic forecasts, calibration and sharpness. J. R. Stat. Soc. Ser. B Stat. Methodol. 69 243-268. · Zbl 1120.62074
[16] Gneiting, T. and Raftery, A. E. (2007). Strictly proper scoring rules, prediction, and estimation. J. Amer. Statist. Assoc. 102 359-378. · Zbl 1284.62093
[17] Gneiting, T. and Ranjan, R. (2011). Comparing density forecasts using threshold- and quantile-weighted scoring rules. J. Bus. Econom. Statist. 29 411-422. · Zbl 1219.91108
[18] Heinrich, C. (2014). The mode functional is not elicitable. Biometrika. · Zbl 1400.62026
[19] Jorion, P. (2006). Value-at-Risk: The New Benchmark for Managing Financial Risk. McGraw-Hill, New York.
[20] Klenke, A. (2008). Probability Theory: A Comprehensive Course. Springer, London. · Zbl 1141.60001
[21] McNeil, A. J., Frey, R. and Embrechts, P. (2005). Quantitative Risk Management: Concepts, Techniques and Tools. Princeton Univ. Press, Princeton, NJ. · Zbl 1089.91037
[22] Mitchell, J. and Wallis, K. F. (2011). Evaluating density forecasts: Forecast combinations, model mixtures, calibration and sharpness. J. Appl. Econometrics 26 1023-1040.
[23] Newey, W. K. and West, K. D. (1987). A simple, positive semidefinite, heteroskedasticity and autocorrelation consistent covariance matrix. Econometrica 55 703-708. · Zbl 0658.62139
[24] Patton, A. J. and Timmermann, A. (2012). Forecast rationality tests based on multi-horizon bounds. J. Bus. Econom. Statist. 30 1-17.
[25] Rockafellar, R. T. and Uryasev, S. (2000). Optimization of conditional value-at-risk. J. Risk 2 21-41.
[26] Tsyplakov, A. (2011). Evaluating density forecasts: A comment. Paper No. 31233, MPRA.
[27] van der Vaart, A. W. and Wellner, J. A. (1996). Weak Convergence and Empirical Processes: With Applications to Statistics. Springer, New York. · Zbl 0862.60002