Book contents
- Frontmatter
- Contents
- Preface
- Acknowledgments
- 1 Introduction
- Part I Fundamental concepts
- Part II Code verification
- Part III Solution verification
- Part IV Model validation and prediction
- 10 Model validation fundamentals
- 11 Design and execution of validation experiments
- 12 Model accuracy assessment
- 13 Predictive capability
- Part V Planning, management, and implementation issues
- Appendix Programming practices
- Index
- Plate Section
- References
12 - Model accuracy assessment
from Part IV - Model validation and prediction
Published online by Cambridge University Press: 05 March 2013
Summary
As discussed in several chapters, particularly Chapter 10, Model validation fundamentals, and Chapter 11, Design and execution of validation experiments, model accuracy assessment is the core issue of model validation. Our intent in model accuracy assessment is to critically and quantitatively determine the ability of a mathematical model, and its embodiment in a computer code, to simulate a well-characterized physical process. Naturally, we are interested only in well-characterized physical processes that are useful for model validation. How critical and quantitative the model accuracy assessment can be depends on:
- (a) how extensively the experimental data set explores the important model input quantities that affect the system response quantities (SRQs) of interest;
- (b) how well characterized the important model input quantities are, based on measurements made in the experiments;
- (c) how well characterized the experimental measurements and the model predictions of the SRQs of interest are;
- (d) whether the experimental measurements of the SRQs were available to the computational analyst before the model accuracy assessment was conducted; and
- (e) if the SRQs were available to the computational analysts, whether they were used for model updating or model calibration.
This chapter explores these difficult issues both conceptually and quantitatively.
We begin the chapter by discussing the fundamental elements of model accuracy assessment. As part of this discussion, we review traditional and recent methods for comparing model results with experimental measurements, and we explore the relationship between model accuracy assessment, model calibration, and model prediction. Starting from the engineering society definitions of terms given in Chapter 2, Fundamental concepts and terminology, the perspective of this book is to segregate these activities as far as possible. There is, however, an alternative perspective in the published literature holding that all of these activities should be combined. We briefly review this alternative perspective and its associated approaches, and contrast them with approaches that segregate these activities.
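To make the idea of a quantitative accuracy assessment concrete, the sketch below computes one common style of validation metric for a single SRQ: the estimated model error (prediction minus the mean of replicate experimental measurements) together with a confidence interval on that estimate. This is an illustrative sketch only, not the book's specific formulation; the function name `validation_metric`, the sample data, and the hard-coded Student-t critical value are all assumptions introduced here for demonstration.

```python
import math
import statistics

def validation_metric(y_model, y_exp, t_crit):
    """Estimate the model error for one SRQ and a confidence
    interval on that estimate from replicate measurements.

    y_model : model prediction of the SRQ (scalar)
    y_exp   : replicate experimental measurements of the SRQ
    t_crit  : two-sided Student-t critical value for n-1 dof
              (taken from a standard t-table)
    """
    n = len(y_exp)
    y_bar = statistics.fmean(y_exp)        # sample mean of measurements
    s = statistics.stdev(y_exp)            # sample standard deviation
    error = y_model - y_bar                # estimated model error
    halfwidth = t_crit * s / math.sqrt(n)  # CI half-width on the error
    return error, (error - halfwidth, error + halfwidth)

# Hypothetical example: 5 replicate measurements of one SRQ;
# the 95% two-sided t value for 4 degrees of freedom is 2.776.
err, ci = validation_metric(102.0, [98.1, 99.4, 100.2, 97.8, 99.0], 2.776)
```

The confidence interval reflects only the experimental replicate scatter; it says nothing about uncertainty in the model inputs, which (as items (a) and (b) in the summary above emphasize) must be characterized separately.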
Verification and Validation in Scientific Computing, pp. 469-554. Publisher: Cambridge University Press. Print publication year: 2010.