Book contents
- Frontmatter
- Contents
- Preface
- Notation and convention
- Part I Loss models
- Part II Risk and ruin
- Part III Credibility
- Part IV Model construction and evaluation
- 10 Model estimation and types of data
- 11 Nonparametric model estimation
- 12 Parametric model estimation
- 13 Model evaluation and selection
- 14 Basic Monte Carlo methods
- 15 Applications of Monte Carlo methods
- Appendix: Review of statistics
- Answers to exercises
- References
- Index
13 - Model evaluation and selection
from Part IV - Model construction and evaluation
Published online by Cambridge University Press: 05 June 2012
Summary
After a model has been estimated, we have to evaluate it to ascertain that the assumptions applied are acceptable and supported by the data. This should be done prior to using the model for prediction and pricing. Model evaluation can be done using graphical methods, as well as formal misspecification tests and diagnostic checks.
Nonparametric methods have the advantage of imposing minimal assumptions and allowing the data to determine the model. However, they are more difficult to analyze theoretically. Parametric methods, on the other hand, summarize the model in a small number of parameters, albeit with the danger of imposing the wrong structure and of oversimplification. By graphically comparing the estimated df and pdf against their empirical counterparts, we can often detect whether the estimated parametric model deviates abnormally from the data.
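As a concrete illustration of such a graphical comparison, the following sketch (not from the text; the gamma-generated loss sample and the exponential candidate model are illustrative assumptions) plots the empirical df of the data against the df of a fitted parametric model:

```python
# Minimal sketch: empirical df of a loss sample vs. a fitted parametric df.
# The simulated gamma losses and the exponential candidate are assumptions
# for illustration only, not examples from the book.
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

# Illustrative loss sample (true model: gamma)
losses = np.sort(stats.gamma(a=2.0, scale=500.0).rvs(size=200, random_state=42))
n = losses.size

# Empirical df: F_n(x_(i)) = i / n at the order statistics
F_emp = np.arange(1, n + 1) / n

# Fit a (possibly misspecified) exponential model by maximum likelihood
loc, scale = stats.expon.fit(losses, floc=0)
F_fit = stats.expon.cdf(losses, loc=loc, scale=scale)

plt.step(losses, F_emp, where="post", label="empirical df")
plt.plot(losses, F_fit, label="fitted exponential df")
plt.xlabel("loss")
plt.ylabel("df")
plt.legend()
plt.show()
```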
Formal misspecification tests can be conducted to compare the estimated model (parametric or nonparametric) against a hypothesized model. When the key interest is the comparison of the df, we may use the Kolmogorov–Smirnov test and Anderson–Darling test. The chi-square goodness-of-fit test is an alternative for testing distributional assumptions, by comparing the observed frequencies against the theoretical frequencies. The likelihood ratio test is applicable to testing the validity of restrictions on a model, and can be used to decide if a model can be simplified.
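The sketch below (again with illustrative data and candidate models, not the book's examples) shows how these tests might be computed with scipy. Note that the Kolmogorov–Smirnov p-value reported here ignores the fact that the parameters were estimated from the data, and the Anderson–Darling routine estimates the parameters internally.

```python
# Illustrative sketch of the misspecification tests named above, using the
# same simulated gamma losses and fitted exponential model as in the
# previous sketch (assumptions for illustration only).
import numpy as np
from scipy import stats

losses = stats.gamma(a=2.0, scale=500.0).rvs(size=200, random_state=42)
loc, scale = stats.expon.fit(losses, floc=0)

# Kolmogorov–Smirnov test of the data against the fitted exponential df
# (p-value does not account for the parameters being estimated)
ks_stat, ks_pval = stats.kstest(losses, "expon", args=(loc, scale))

# Anderson–Darling test against an exponential model
ad = stats.anderson(losses, dist="expon")  # compare ad.statistic with ad.critical_values

# Chi-square goodness-of-fit: observed vs. expected cell frequencies
edges = np.concatenate(([0.0], np.quantile(losses, [0.2, 0.4, 0.6, 0.8]), [np.inf]))
observed, _ = np.histogram(losses, bins=edges)
expected = losses.size * np.diff(stats.expon.cdf(edges, loc=loc, scale=scale))
chi2_stat = ((observed - expected) ** 2 / expected).sum()
# degrees of freedom = cells - 1 - number of estimated parameters (here 1)
chi2_pval = stats.chi2.sf(chi2_stat, df=len(observed) - 1 - 1)

# Likelihood ratio test of the exponential restriction (gamma with shape = 1)
a, gloc, gscale = stats.gamma.fit(losses, floc=0)
ll_gamma = stats.gamma.logpdf(losses, a, loc=gloc, scale=gscale).sum()
ll_expon = stats.expon.logpdf(losses, loc=loc, scale=scale).sum()
lr_stat = 2.0 * (ll_gamma - ll_expon)
lr_pval = stats.chi2.sf(lr_stat, df=1)  # one restriction imposed

print(ks_stat, ks_pval, chi2_stat, chi2_pval, lr_stat, lr_pval)
```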
When several estimated models pass most of the diagnostics, the adoption of a particular model may be decided using an information criterion, such as the Akaike information criterion or the Schwarz information criterion.
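A minimal sketch of the two information criteria follows, using one common sign convention in which smaller values are preferred; the log-likelihoods plugged in are those from the previous sketch and are assumptions for illustration.

```python
# Sketch of the Akaike and Schwarz information criteria under the
# convention AIC = -2 log L + 2p and SIC = -2 log L + p log n
# (smaller is preferred); other texts may use a different sign convention.
import numpy as np

def aic(loglik, n_params):
    """Akaike information criterion."""
    return -2.0 * loglik + 2.0 * n_params

def sic(loglik, n_params, n_obs):
    """Schwarz (Bayesian) information criterion."""
    return -2.0 * loglik + n_params * np.log(n_obs)

# e.g. compare the exponential (1 parameter) and gamma (2 parameters) fits
# from the previous sketch, each estimated from 200 observations:
# print(aic(ll_expon, 1), aic(ll_gamma, 2))
# print(sic(ll_expon, 1, 200), sic(ll_gamma, 2, 200))
```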
- Type: Chapter
- Information: Nonlife Actuarial Models: Theory, Methods and Evaluation, pp. 380-399
- Publisher: Cambridge University Press
- Print publication year: 2009