Book contents
- Frontmatter
- Dedication
- Contents
- Preface
- Acknowledgements
- 1 Introduction
- Part I Background and Fundamentals
- Part II Statistical Downscaling Concepts and Methods
- Part III Downscaling in Practice and Outlook
- 15 Evaluation
- 16 Performance of Statistical Downscaling
- 17 A Regional Modelling Debate
- 18 Use of Downscaling in Practice
- 19 Outlook
- Appendix A Methods Used in This Book
- Appendix B Useful Resources
- References
- Index
15 - Evaluation
from Part III - Downscaling in Practice and Outlook
Published online by Cambridge University Press: 27 December 2017
Summary
As discussed in Chapter 5, two key information requirements for users of climate information are credibility and salience. A proper evaluation is key to establishing the credibility of regional (in fact, any) climate projections: by analysing the realism of the chosen, potentially statistically post-processed, climate model simulations and by assessing the credibility of the regional future projections. A proper evaluation also has to be designed to provide salient information: the statistical aspects evaluated need to be those that are relevant to users.
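To illustrate what evaluating user-relevant statistical aspects might look like in practice, the following minimal sketch (an illustrative assumption, not taken from the book) compares a downscaled daily precipitation series against station observations for a few user-relevant statistics: mean bias, wet-day frequency and heavy wet-day intensity. The arrays `obs` and `downscaled`, the 1 mm/day wet-day threshold and the NumPy-based metrics are hypothetical placeholders standing in for real data and a real evaluation framework.

```python
# Minimal evaluation sketch, assuming daily precipitation (mm/day) for the
# same station and period is available as the hypothetical arrays
# `obs` (observations) and `downscaled` (post-processed model output).
import numpy as np


def evaluate(obs: np.ndarray, downscaled: np.ndarray, wet_threshold: float = 1.0) -> dict:
    """Compare a downscaled simulation against observations for a few
    user-relevant statistics: mean bias, wet-day frequency bias and the
    bias in the 90th percentile of wet-day amounts."""

    def wet_day_freq(x: np.ndarray) -> float:
        # Fraction of days at or above the (assumed) wet-day threshold.
        return float(np.mean(x >= wet_threshold))

    def wet_p90(x: np.ndarray) -> float:
        # 90th percentile of precipitation on wet days only.
        wet = x[x >= wet_threshold]
        return float(np.percentile(wet, 90)) if wet.size else float("nan")

    return {
        "mean_bias": downscaled.mean() - obs.mean(),
        "wet_day_freq_bias": wet_day_freq(downscaled) - wet_day_freq(obs),
        "wet_p90_bias": wet_p90(downscaled) - wet_p90(obs),
    }


# Example with synthetic data standing in for real observations and a
# real downscaled simulation (ten years of daily values).
rng = np.random.default_rng(0)
obs = rng.gamma(shape=0.4, scale=8.0, size=3650)
downscaled = rng.gamma(shape=0.45, scale=7.5, size=3650)
print(evaluate(obs, downscaled))
```

Which statistics belong in such a comparison depends entirely on the application; the point of the sketch is only that the evaluated aspects (here wet-day frequency and heavy-precipitation intensity) are chosen for their relevance to the user, not for modelling convenience.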
Barsugli et al. (2013) and Hewitson et al. (2014) highlight the practitioner's dilemma: users of downscaled information are faced with a plethora of regional climate projections, based on different GCMs, downscaling methods and approaches, realisations and forcings, with widely varying and often contradictory results. Key to a sensible evaluation is therefore a common framework that makes it possible to trace differences between the individual simulations.
For global climate models, the CMIP framework (Meehl et al. 2007a, Taylor et al. 2012) provides the basis for broad intercomparison studies. Regional climate models have been intercompared within the PIRCS (Takle et al. 1999), PRUDENCE (Christensen and Christensen 2007), ENSEMBLES (van der Linden and Mitchell 2009), NARCCAP (Mearns et al. 2009) and, most recently, the CORDEX initiatives (Giorgi et al. 2009). The first broad intercomparison of statistical downscaling methods was carried out within the European STARDEX project (Goodess et al. 2010). Similar projects have been carried out for Australia (Frost et al. 2011), China (Hu et al. 2013) and the US (Gutmann et al. 2014). Recently, the VALUE network carried out the most comprehensive intercomparison of statistical downscaling and bias correction methods for European climate (Maraun et al. 2015). Ongoing activities include the BCIP intercomparison of bias correction methods and the CORDEX-ESD evaluation for South Africa and the La Plata basin in South America. Guidelines for the evaluation of regional climate projections have been published by Hewitson et al. (2014) and Maraun et al. (2015).
- Type: Chapter
- Information: Statistical Downscaling and Bias Correction for Climate Research, pp. 227-241
- Publisher: Cambridge University Press
- Print publication year: 2018