Book contents
- Frontmatter
- Dedication
- Contents
- Preface
- Acknowledgements
- 1 Introduction
- Part I Background and Fundamentals
- 2 Regional Climate
- 3 History of Downscaling
- 4 Rationale of Downscaling
- 5 User Needs
- 6 Mathematical and Statistical Methods
- 7 Reference Observations
- 8 Climate Modelling
- 9 Uncertainties
- Part II Statistical Downscaling Concepts and Methods
- Part III Downscaling in Practice and Outlook
- Appendix A Methods Used in This Book
- Appendix B Useful Resources
- References
- Index
3 - History of Downscaling
from Part I - Background and Fundamentals
Published online by Cambridge University Press: 27 December 2017
Summary
Downscaling in Weather Forecasting
The first downscaling methods were invented as early as the late 1940s (Klein 1948) and became operational in the early days of numerical weather prediction at the end of the 1950s. Back then, operational numerical weather prediction models were far too coarse to predict local weather; moreover, they did not forecast all variables of interest but only a few, such as pressure and temperature. A considerable network of observed weather time series was, however, already available. Klein et al. (1959) employed these data to infer statistical relationships between the observed large-scale circulation – for those variables that were simulated by the models – and the observed local-scale weather variables of interest. The statistical model was then applied to downscale the actual numerical forecast of the large-scale circulation into a forecast of the local weather. The key assumption of this approach is that the large-scale predictor has been perfectly forecast by the numerical model; hence the approach was coined perfect prognosis (PP).
After some years, a considerable database of past forecasts had been archived. Analyses of these data revealed that numerical forecasts, even of the large-scale weather, were of course not perfect but showed systematic deviations from observations. Yet this database also became key to mitigating the problem: Glahn and Lowry (1972) developed a new approach that – during calibration – took the predictors not from observations but from the archived numerical forecasts. For a new weather prediction, the inferred statistical link is then applied to the new numerical forecast. As this approach is essentially a post-processing of numerical model data, it was coined model output statistics (MOS). The key advantage of MOS is that, by construction, it contains a bias correction of the numerical model.
Current weather prediction systems employ complex MOS approaches with several predictors that are continually recalibrated to provide the highest predictive skill.
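The difference between PP and MOS calibration can be sketched with a toy linear-regression example. This is not from the chapter: all data are synthetic, and a single predictor with an ordinary-least-squares link is assumed purely for illustration. PP calibrates the statistical link on observations and then applies it to a biased model forecast, so the model bias leaks into the local prediction; MOS calibrates directly on archived forecasts and thereby absorbs the bias.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic "observed" large-scale predictor (e.g. a circulation index)
# and a local predictand that depends linearly on it.
x_obs = rng.normal(size=n)
y_obs = 2.0 * x_obs + rng.normal(scale=0.5, size=n)

# A numerical-model forecast of the same large-scale state, given a
# systematic bias (offset and damped amplitude) plus forecast noise.
x_model = 0.8 * x_obs + 0.3 + rng.normal(scale=0.2, size=n)

def fit_linear(x, y):
    """Ordinary least squares for y ~ a*x + b."""
    a, b = np.polyfit(x, y, 1)
    return a, b

# Perfect prognosis (PP): calibrate on observations only, then apply
# the inferred link to the (biased) model forecast.
a_pp, b_pp = fit_linear(x_obs, y_obs)
y_pp = a_pp * x_model + b_pp

# Model output statistics (MOS): calibrate on the archived model
# forecasts themselves, which corrects the bias by construction.
a_mos, b_mos = fit_linear(x_model, y_obs)
y_mos = a_mos * x_model + b_mos

print("PP mean error :", np.mean(y_pp - y_obs))
print("MOS mean error:", np.mean(y_mos - y_obs))
```

In this sketch the PP downscaling inherits the model's systematic offset, while the MOS mean error vanishes, mirroring the bias-correction property described above.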
In parallel, numerical approaches were developed to improve the resolution and accuracy of forecasts over a target region. The first limited-area model was developed at the US National Meteorological Center (Howcroft 1966, Gerrity and McPherson 1969) and became operational in 1971. This model covered the US, Canada and the Arctic Ocean at a horizontal resolution of 190.5 km at 60°N and was driven at the lateral boundaries with input from a Northern Hemisphere numerical weather prediction model.
Publisher: Cambridge University Press. Print publication year: 2018.