
The Diversity of Model Tuning Practices in Climate Science

Published online by Cambridge University Press:  01 January 2022

Abstract

Many examples of calibration in climate science raise no alarms regarding model reliability. We examine one such example and show that, in employing classical hypothesis testing, it involves calibrating a base model against data that are also used to confirm the model. This runs counter to the ‘intuitive position’ (in favor of use-novelty and against double counting). We argue, however, that aspects of the intuitive position are upheld by some methods, in particular the general cross-validation method. We also discuss how cross-validation relates to other prominent classical methods, such as the Akaike information criterion and the Bayesian information criterion.
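The contrast the abstract draws can be made concrete with a minimal numerical sketch (ours, not the paper's; the linear data, polynomial candidate models, and helper function are illustrative assumptions): in leave-one-out cross-validation, each candidate model is calibrated on all but one data point and scored on the held-out point, so no observation is used both to tune a model and, at the same time, to confirm it.

```python
# Illustrative sketch only (not from the paper): leave-one-out
# cross-validation (LOO-CV) scores each candidate model on data held
# out from calibration, avoiding the double-counting worry.
import numpy as np

def loo_cv_error(x, y, degree):
    """Mean squared leave-one-out prediction error of a polynomial fit."""
    n = len(x)
    errors = []
    for i in range(n):
        train = np.arange(n) != i                        # hold out point i
        coeffs = np.polyfit(x[train], y[train], degree)  # calibrate on the rest
        pred = np.polyval(coeffs, x[i])                  # confirm on the held-out point
        errors.append((y[i] - pred) ** 2)
    return float(np.mean(errors))

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 12)
y = 2.0 * x + 0.1 * rng.normal(size=x.size)  # truth is linear, plus noise

# A flexible (degree-7) model fits the calibration data more closely,
# but LOO-CV penalizes its poor out-of-sample predictions.
scores = {d: loo_cv_error(x, y, d) for d in (1, 3, 7)}
best_degree = min(scores, key=scores.get)
```

Stone (1977), cited below, shows that model choice by cross-validation of this kind is asymptotically equivalent to choice by Akaike's criterion, which is one way the abstract's comparison with AIC can be cashed out.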

Type
Evidence for Climate Policy
Copyright
Copyright © The Philosophy of Science Association

Footnotes

We are grateful to the anonymous referees for valuable discussion and suggestions, as well as to the audiences at PSA 2014, the 2015 Philosophy of Climate Science Conference at the University of Pittsburgh, the Theory Construction in Science Conference at the London School of Economics, the Philosophy Colloquium at the University of Groningen, the Philosophy of Science Seminar at Bristol University, the Colloquium in Mathematical Philosophy at the Munich Center for Mathematical Philosophy, the British Society for the Philosophy of Science Seminar, the 2014 Trends in Logic Workshop at Ghent University, and the Third Reasoning Club Conference at the University of Kent. Funding support for the research was provided by the Arts and Humanities Research Council (AH/J006033/1) and by the ESRC Centre for Climate Change Economics and Policy, funded by the Economic and Social Research Council (ES/K006576/1 to Charlotte Werndl). Katie Steele was also supported by a three-month research fellowship in residence at the Swedish Collegium for Advanced Study.

References

Arlot, Sylvain, and Celisse, Alain. 2010. “A Survey of Cross-Validation Procedures for Model Selection.” Statistics Surveys 4:40–79.
Burnham, Kenneth P., and Anderson, David R. 1998. Model Selection and Multimodel Inference. Berlin: Springer.
Elsner, James B., and Schmertmann, Carl P. 1994. “Assessing Forecast Skill through Cross-Validation.” Weather and Forecasting 9:619–24.
Flato, G., Marotzke, J., Abiodun, B., Braconnot, P., Chou, S. C., Collins, W., Cox, P., Driouech, F., Emori, S., Eyring, V., Forest, C., Gleckler, P., Guilyardi, E., Jakob, C., Kattsov, V., Reason, C., and Rummukainen, M. 2013. “Evaluation of Climate Models.” In Climate Change 2013: The Physical Science Basis; Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, ed. Stocker, T. F., Qin, D., Plattner, G.-K., Tignor, M., Allen, S. K., Boschung, J., Nauels, A., Xia, Y., Bex, V., and Midgley, P. M. Cambridge: Cambridge University Press.
Frisch, Mathias. 2015. “Predictivism and Old Evidence: A Critical Look at Climate Model Tuning.” European Journal for Philosophy of Science 5 (2): 171–90.
Linhart, H., and Zucchini, Walter. 1986. Model Selection. Wiley Series in Probability and Statistics. New York: Wiley.
Michaelsen, Joel. 1987. “Cross-Validation in Statistical Climate Forecast Models.” Journal of Climate and Applied Meteorology 26:1589–1600.
Romeijn, Jan-Willem, van de Schoot, Rens, and Hoijtink, Herbert. 2012. “One Size Does Not Fit All: Derivation of a Prior-Adapted BIC.” In Probabilities, Laws, and Structures, ed. Dieks, Dennis, Gonzales, Wenceslao, Hartmann, Stephan, Stadler, Fritz, Uebel, Thomas, and Weber, Marcel, 87–106. Berlin: Springer.
Schwarz, Gideon. 1978. “Estimating the Dimension of a Model.” Annals of Statistics 6:461–64.
Sober, Elliott. 2008. Evidence and Evolution. Cambridge: Cambridge University Press.
Sprenger, Jan. 2013. “The Role of Bayesian Philosophy within Bayesian Model Selection.” European Journal for the Philosophy of Science 3:101–14.
Steele, Katie, and Werndl, Charlotte. 2013. “Climate Models, Confirmation and Calibration.” British Journal for the Philosophy of Science 64:609–35.
Stone, Daithi A., Allen, Myles R., Selten, Frank, Kliphuis, Michael, and Stott, Peter A. 2007. “The Detection and Attribution of Climate Change Using an Ensemble of Opportunity.” Journal of Climate 20:504–16.
Stone, M. 1977. “An Asymptotic Equivalence of Choice of Model by Cross-Validation and Akaike’s Criterion.” Journal of the Royal Statistical Society B 39 (1): 44–47.
Worrall, John. 2010. “Error, Tests, and Theory Confirmation.” In Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability, and the Objectivity and Rationality of Science, ed. Mayo, Deborah G., and Spanos, Aris, 125–54. Cambridge: Cambridge University Press.
Zucchini, Walter. 2000. “An Introduction to Model Selection.” Journal of Mathematical Psychology 44:41–61.