Book contents
- Frontmatter
- Contents
- Contributors
- Preface
- Acknowledgments
- 1 Pure Premium Modeling Using Generalized Linear Models
- 2 Applying Generalized Linear Models to Insurance Data: Frequency/Severity versus Pure Premium Modeling
- 3 Generalized Linear Models as Predictive Claim Models
- 4 Frameworks for General Insurance Ratemaking: Beyond the Generalized Linear Model
- 5 Using Multilevel Modeling for Group Health Insurance Ratemaking: A Case Study from the Egyptian Market
- 6 Clustering in General Insurance Pricing
- 7 Application of Two Unsupervised Learning Techniques to Questionable Claims: PRIDIT and Random Forest
- 8 The Predictive Distribution of Loss Reserve Estimates over a Finite Time Horizon
- 9 Finite Mixture Model and Workers’ Compensation Large-Loss Regression Analysis
- 10 A Framework for Managing Claim Escalation Using Predictive Modeling
- 11 Predictive Modeling for Usage-Based Auto Insurance
- Index
- References
9 - Finite Mixture Model and Workers’ Compensation Large-Loss Regression Analysis
Published online by Cambridge University Press: 05 August 2016
Summary
Chapter Preview. Actuaries have been studying loss distributions since the emergence of the profession. Numerous studies have found that widely used distributions, such as the lognormal, Pareto, and gamma, do not fit insurance data well. Mixture distributions have gained popularity in recent years because of their flexibility in representing insurance losses arising from claims of various sizes, especially on the right tail. To incorporate mixture distributions into the framework of popular generalized linear models (GLMs), the authors propose using finite mixture models (FMMs) to analyze insurance loss data. The regression approach enhances the traditional whole-book distribution analysis by capturing the impact of individual explanatory variables. FMMs improve on the standard GLM by addressing distribution-related problems such as heteroskedasticity, over- and underdispersion, unobserved heterogeneity, and fat tails. A case study with applications to claims triage and high-deductible pricing using workers’ compensation data illustrates these benefits.
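The core idea in the preview, fitting a finite mixture so that separate components capture the body and the heavy right tail of a loss distribution, can be sketched as follows. This is a minimal illustration, not the authors' method: it uses simulated severities and `sklearn.mixture.GaussianMixture` on the log scale (i.e., a two-component mixture of lognormals); all parameter values are hypothetical.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical claim severities: a large body of attritional claims
# plus a smaller, heavier-tailed component of large claims.
losses = np.concatenate([
    rng.lognormal(mean=8.0, sigma=0.8, size=900),   # attritional claims
    rng.lognormal(mean=11.0, sigma=1.2, size=100),  # large claims
])

# Fit a two-component Gaussian mixture on the log scale,
# which is equivalent to a mixture of lognormals on the original scale.
gm = GaussianMixture(n_components=2, random_state=0)
gm.fit(np.log(losses).reshape(-1, 1))

weights = gm.weights_        # estimated mixing proportions
means = gm.means_.ravel()    # component means on the log scale
print(weights, means)
```

Each observation also receives posterior component probabilities (`gm.predict_proba`), which is the mechanism behind applications like claims triage: claims with a high posterior weight on the large-loss component can be flagged for special handling. The chapter's FMM regression goes further by letting covariates enter each component, which this sketch omits.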
Introduction
Conventional Large Loss Distribution Analysis
Large loss distributions have been extensively studied because of their importance in actuarial applications such as increased limits factor and excess loss pricing (Miccolis, 1977), reinsurance retention and layer analysis (Clark, 1996), high-deductible pricing (Teng, 1994), and enterprise risk management (Wang, 2002). Klugman et al. (1998) discussed frequency, severity, and aggregate loss distributions in detail in their book, which has been on the Casualty Actuarial Society syllabus for the Construction and Evaluation of Actuarial Models exam for many years. Keatinge (1999) demonstrated that popular single distributions, including those in Klugman et al. (1998), are not adequate to represent insurance losses well and suggested using mixtures of exponential distributions to improve the goodness of fit. Beirlant et al. (2001) proposed a flexible generalized Burr-gamma distribution to address the heavy tail of losses and validated the effectiveness of this parametric distribution by comparing its implied excess-of-loss reinsurance premium with those of other nonparametric and semiparametric distributions. Matthys et al. (2004) presented an extreme quantile estimator to deal with extreme insurance losses. Fleming (2008) showed that the sample average of any small sample from a skewed population is most likely below the true mean and warned of the danger of making insurance pricing decisions without considering extreme events. Henry and Hsieh (2009) stressed the importance of understanding the heavy-tail behavior of a loss distribution and developed a tail index estimator assuming that insurance losses possess Pareto-type tails.
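Fleming's observation, that the average of a small sample from a right-skewed population usually understates the true mean, is easy to verify by simulation. The sketch below assumes a lognormal severity population (a stand-in for any skewed loss distribution; the parameters are hypothetical) and checks how often a 10-claim sample average falls below the true mean.

```python
import numpy as np

rng = np.random.default_rng(42)

mu, sigma = 0.0, 1.5
true_mean = np.exp(mu + sigma**2 / 2)  # exact mean of the lognormal

# Draw many small samples (n = 10 claims each) and record each sample average.
n_trials, n = 20_000, 10
sample_means = rng.lognormal(mu, sigma, size=(n_trials, n)).mean(axis=1)

frac_below = (sample_means < true_mean).mean()
print(frac_below)  # well above 0.5: most small-sample averages understate the mean
```

The asymmetry arises because the rare, extreme losses that pull the true mean upward are usually absent from a small sample; the occasional sample that does contain one overshoots badly, but most samples fall short. This is precisely why pricing on limited experience without an explicit large-loss provision is dangerous.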
- Type: Chapter
- Information: Predictive Modeling Applications in Actuarial Science, pp. 224-260
- Publisher: Cambridge University Press
- Print publication year: 2016