
ROBUST FORECAST COMPARISON

Published online by Cambridge University Press:  27 October 2016

Sainan Jin*, Singapore Management University
Valentina Corradi, University of Surrey
Norman R. Swanson, Rutgers University
*Address correspondence to Sainan Jin, School of Economics, Singapore Management University, 90 Stamford Road, Singapore 178903; e-mail: [email protected].

Abstract

Forecast accuracy is typically measured in terms of a given loss function. However, because multiple model comparisons typically involve misspecified models, relative forecast rankings are loss-function dependent. To address this issue, we introduce a novel criterion for forecast evaluation that utilizes the entire distribution of forecast errors. In particular, we introduce the concepts of general-loss (GL) forecast superiority and convex-loss (CL) forecast superiority, and we develop tests for GL (CL) superiority that are based on an out-of-sample generalization of the tests introduced by Linton, Maasoumi, and Whang (2005, Review of Economic Studies 72, 735–765). Our test statistics have nonstandard limiting distributions under the null, necessitating the use of resampling procedures to obtain critical values. The tests are consistent and have nontrivial local power under a sequence of local alternatives. The theory is developed for the stationary case, as well as for the case of heterogeneity induced by distributional change over time. Monte Carlo simulations suggest that the tests perform reasonably well in finite samples, and an application to exchange rate data indicates that our tests can help identify superior forecasting models, regardless of loss function.
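The GL/CL superiority criteria described in the abstract are closely related to stochastic dominance of forecast-error distributions. As a rough illustration only, and not the paper's actual statistic, the sketch below compares two models' absolute forecast errors via a sup-norm difference of empirical CDFs, with a recentred i.i.d. bootstrap for the p-value; the paper instead handles temporally dependent errors with block-resampling/subsampling schemes, and all function names here are hypothetical.

```python
import numpy as np

def ecdf_on(values, grid):
    # Empirical CDF of `values` evaluated at each point of `grid`.
    return (values[None, :] <= grid[:, None]).mean(axis=1)

def sup_stat(err1, err2, grid):
    # sqrt(n) * sup_x [ F_{|e2|}(x) - F_{|e1|}(x) ]: large positive values
    # suggest model 2's absolute errors are stochastically smaller,
    # i.e., model 2 forecasts better under a wide class of loss functions.
    n = len(err1)
    d = ecdf_on(np.abs(err2), grid) - ecdf_on(np.abs(err1), grid)
    return np.sqrt(n) * d.max()

def dominance_test(err1, err2, n_boot=199, seed=0):
    # Illustrative i.i.d. bootstrap with recentring: each bootstrap
    # statistic is the sup of the resampled CDF gap minus the sample gap.
    # (For dependent forecast errors a block bootstrap would be needed.)
    rng = np.random.default_rng(seed)
    n = len(err1)
    grid = np.sort(np.abs(np.concatenate([err1, err2])))
    stat = sup_stat(err1, err2, grid)
    d0 = ecdf_on(np.abs(err2), grid) - ecdf_on(np.abs(err1), grid)
    boot = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)  # resample error pairs jointly
        db = (ecdf_on(np.abs(err2[idx]), grid)
              - ecdf_on(np.abs(err1[idx]), grid))
        boot[b] = np.sqrt(n) * (db - d0).max()
    pval = (boot >= stat).mean()
    return stat, pval
```

For example, if model 1's errors are N(0, 4) and model 2's are N(0, 1), the statistic is large and the bootstrap p-value is near zero, so model 2 would be judged superior without committing to any particular loss function.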

Type: Articles
Copyright © Cambridge University Press 2016


Footnotes

We are grateful to the editor, Peter C. B. Phillips, the co-editor, Robert Taylor, two referees, Xu Cheng, Frank Diebold, Jesus Gonzalo, Simon Lee, Federico Martellosio, Chris Martin, Vito Polito, Barbara Rossi, Olivier Scaillet, Minchul Shin, Tang Srisuma, and Liangjun Su; and to seminar participants at the University of Surrey, the University of Bath, the 2015 Princeton/QUT/SJTU/SMU Conference on the Frontier in Econometrics, the 2015 UK Econometric Study Group, the 4th International Conference in Applied Econometrics, Università di Milano-Bicocca, and the 2016 Econometric Society Winter Meeting for useful comments and suggestions. Jin thanks the Singapore Ministry of Education for research support through Academic Research Fund grant MOE2012-T2-2-021.

REFERENCES

Andrews, D.W.K. & Guggenberger, P. (2010) Asymptotic size and a problem with subsampling and with the m out of n bootstrap. Econometric Theory 26, 426–468.
Chiang, T. (1986) Empirical analysis of the predictor of future spot rates. Journal of Financial Research 9, 153–162.
Corradi, V., Jin, S., & Swanson, N.R. (2016) Improved Tests for Robust Forecast Comparison. Mimeo, University of Surrey.
Corradi, V. & Swanson, N.R. (2005) A test for comparing multiple misspecified conditional interval models. Econometric Theory 21, 991–1016.
Corradi, V. & Swanson, N.R. (2006) Predictive density evaluation. In Elliott, G., Granger, C.W.J., & Timmermann, A. (eds.), Handbook of Economic Forecasting, vol. 1, pp. 194–284. Elsevier.
Corradi, V. & Swanson, N.R. (2007) Nonparametric bootstrap procedures for predictive inference based on recursive estimation schemes. International Economic Review 48, 67–109.
Corradi, V. & Swanson, N.R. (2013) A survey of recent advances in forecast accuracy comparison testing, with an extension to stochastic dominance. In Chen, X. & Swanson, N. (eds.), Causality, Prediction, and Specification Analysis: Recent Advances and Future Directions, Essays in Honor of Halbert L. White, Jr., pp. 121–143. Springer.
Diebold, F.X. & Mariano, R.S. (1995) Comparing predictive accuracy. Journal of Business and Economic Statistics 13, 253–263.
Diebold, F.X. & Shin, M. (2015) Assessing point forecast accuracy by stochastic loss distance. Economics Letters 130, 37–38.
Elliott, G. & Timmermann, A. (2004) Optimal forecast combinations under general loss functions and forecast error distributions. Journal of Econometrics 122, 47–79.
Giacomini, R. & White, H. (2006) Tests of conditional predictive ability. Econometrica 74, 1545–1578.
Goncalves, S. & White, H. (2004) Maximum likelihood and the bootstrap for nonlinear dynamic models. Journal of Econometrics 119, 199–220.
Granger, C.W.J. (1999) Outline of forecast theory using generalized cost functions. Spanish Economic Review 1, 161–173.
Hall, P., Horowitz, J., & Jing, B.Y. (1995) On blocking rules for the bootstrap with dependent data. Biometrika 82, 561–574.
Hansen, B.E. (1996) Stochastic equicontinuity for unbounded dependent heterogeneous arrays. Econometric Theory 12, 347–359.
Hansen, P.R. (2005) A test for superior predictive ability. Journal of Business and Economic Statistics 23, 365–380.
Harel, M. & Puri, M.L. (1999) Conditional empirical processes defined by nonstationary absolutely regular sequences. Journal of Multivariate Analysis 70, 250–285.
Herrndorf, N. (1984) An invariance principle for weakly dependent sequences of random variables. Annals of Probability 12, 141–153.
Holm, S. (1979) A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70.
Hunter, W.C. & Timme, S.G. (1992) A stochastic dominance approach to evaluating foreign exchange hedging strategies. Financial Management 21, 104–112.
Klecan, L., McFadden, R., & McFadden, D. (1991) A Robust Test for Stochastic Dominance. Working paper, Department of Economics, MIT.
Linton, O., Maasoumi, E., & Whang, Y.J. (2005) Consistent testing for stochastic dominance: A subsampling approach. Review of Economic Studies 72, 735–765.
Linton, O., Song, K., & Whang, Y.J. (2010) An improved bootstrap test of stochastic dominance. Journal of Econometrics 154, 186–202.
McCracken, M.W. (2000) Robust out-of-sample inference. Journal of Econometrics 99, 195–223.
Meese, R. & Rogoff, K. (1983) Empirical exchange rate models of the seventies: Do they fit out of sample? Journal of International Economics 14, 3–24.
Politis, D.N. & Romano, J.P. (1994) Limit theorems for weakly dependent Hilbert space valued random variables with application to the stationary bootstrap. Statistica Sinica 4, 461–476.
Pollard, D. (1990) Empirical Processes: Theory and Applications. CBMS Conference Series in Probability and Statistics, vol. 2. Institute of Mathematical Statistics.
West, K.D. (1996) Asymptotic inference about predictive ability. Econometrica 64, 1067–1084.
White, H. (2000) A reality check for data snooping. Econometrica 68, 1097–1126.
Supplementary material

Jin et al. supplementary material 1 (File, 64.1 KB)
Jin et al. supplementary material 2 (PDF, 169 KB)