Book contents
- Frontmatter
- Contents
- List of figures
- List of tables
- Preface
- Common acronyms
- 1 An introduction to forecasting
- 2 First principles
- 3 Evaluating forecast accuracy
- 4 Forecasting in univariate processes
- 5 Monte Carlo techniques
- 6 Forecasting in cointegrated systems
- 7 Forecasting with large-scale macroeconometric models
- 8 A theory of intercept corrections: beyond mechanistic forecasts
- 9 Forecasting using leading indicators
- 10 Combining forecasts
- 11 Multi-step estimation
- 12 Parsimony
- 13 Testing forecast accuracy
- 14 Postscript
- Glossary
- References
- Author index
- Subject index
3 - Evaluating forecast accuracy
Published online by Cambridge University Press: 02 November 2009
Summary
This chapter considers the evaluation of forecasts. It begins with the evaluation of forecast performance based on the efficient use of information, for both rolling-event and fixed-event forecasts. Notions of efficiency and unbiasedness are viewed as desirable properties for forecasts to possess, but, in practice, comparisons between forecasts produced by rival models would seem to be a preferable alternative to evaluation based on a comparison of individual-model forecasts with outcomes. This requires a metric for assessing forecast accuracy. Although context-specific cost functions defining mappings between forecast errors and the costs of making those errors may exist in some instances, general MSFE-based measures have been the dominant criteria for assessing forecast accuracy in macroeconomic forecasting. However, such measures typically are not invariant to non-singular, scale-preserving linear transforms, even though linear models are. Consequently, different rankings across models or methods can be obtained from various MSFE measures by choosing alternative yet isomorphic representations of a given model. Thus, MSFE rankings can be an artefact of the linear transformation selected. A generalized forecast-error second-moment criterion is proposed with the property of invariance, but interesting problems relating to model choice and the forecast horizon remain.
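The non-invariance point can be made concrete with a small numerical sketch. The covariance matrices and the transform below are illustrative assumptions, not figures from the book: two hypothetical one-step forecast-error covariance matrices for rival models of a bivariate system are compared under the trace-MSFE criterion before and after an isomorphic re-representation of the system (a non-singular linear transform with unit determinant), and the ranking flips; the determinant of the error second-moment matrix, which is the one-step special case of a generalized forecast-error second-moment criterion, is unaffected.

```python
import numpy as np

# Hypothetical one-step forecast-error covariance matrices for two rival
# models of a bivariate system (illustrative numbers only).
V1 = np.array([[1.0, 0.9],
               [0.9, 1.0]])  # model 1: larger variances, strongly correlated errors
V2 = np.array([[0.8, 0.0],
               [0.0, 0.8]])  # model 2: smaller, uncorrelated errors

# A non-singular, scale-preserving linear transform (det M = 1), mapping
# (y1, y2) to (y1 - y2, y2): an isomorphic representation of the same model.
M = np.array([[1.0, -1.0],
              [0.0,  1.0]])

def tmsfe(V):
    """Trace MSFE: sum of mean squared forecast errors across variables."""
    return np.trace(V)

def det_criterion(V):
    """Determinant of the error second-moment matrix (one-step special case
    of the generalized forecast-error second-moment criterion)."""
    return np.linalg.det(V)

# In the original representation, model 2 wins on trace MSFE:
print(tmsfe(V1), tmsfe(V2))                      # 2.0 vs 1.6

# After the transform, the ranking flips in favour of model 1:
print(tmsfe(M @ V1 @ M.T), tmsfe(M @ V2 @ M.T))  # about 1.2 vs 2.4

# The determinant is invariant: det(M V M') = det(M)^2 det(V) = det(V),
# so the determinant-based ranking is the same in both representations.
print(det_criterion(V1), det_criterion(M @ V1 @ M.T))  # both about 0.19
print(det_criterion(V2), det_criterion(M @ V2 @ M.T))  # both 0.64
```

Because det(M) = 1, the determinant criterion delivers the same ranking (model 1 preferred here) under every such re-representation, whereas the trace-MSFE verdict depends on which isomorphic form of the model is chosen.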
- Type: Chapter
- In: Forecasting Economic Time Series, pp. 52-78
- Publisher: Cambridge University Press
- Print publication year: 1998