
Catastrophe insurance decision making when the science is uncertain

Published online by Cambridge University Press:  11 September 2024

Richard Bradley*
Affiliation:
Department of Philosophy, Logic and Scientific Method, London School of Economics and Political Science, Houghton Street, London WC2A 2AE, UK

Abstract

Insurers draw on sophisticated models for the probability distributions over losses associated with catastrophic events that are required to price insurance policies. But prevailing pricing methods don't factor in the ambiguity around model-based projections that derives from the relative paucity of data about extreme events. I argue, however, that most current theories of decision making under ambiguity only partially support a solution to the challenge that insurance decision makers face, and I propose an alternative approach that allows for decision making that is responsive to both the evidential situation of the insurance decision maker and their attitude to ambiguity.

Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press

1. Introduction

The impact of natural disasters on lives and livelihoods is significant and likely to become more so as the climate changes. Over the past 20 years, more than 1.25 million people have died as a direct result of natural disasters and close to 4 billion have been adversely affected (UNDDR 2020). Although direct deaths from natural catastrophes have declined over the last hundred years, economic losses have risen in line with GDP (Ritchie and Roser 2024). Global losses quadrupled from $50 billion a year in the 1980s to $200 billion in the early 2010s, for instance, and in 2022 they stood at an estimated $343 billion, hurricane Ian alone having contributed in excess of $50 billion of insured losses (Aon 2024). Moreover, according to the World Bank report 'Shock Waves', 75% of expected future losses associated with climate change can be attributed to an increase in the frequency and/or severity of natural catastrophes (Hallegatte et al. 2015). Even with large margins for error, these are impressive figures.

Insurance and reinsurance are important components of any strategy for managing these impacts (alongside, of course, measures to improve resilience and reduce vulnerability, and disaster relief planning). Above all they offer the possibility of an efficient and cost-effective redistribution of some of the risk away from those who are most vulnerable to natural hazards and onto those better positioned to absorb it, thereby indirectly enhancing the financial resilience of both individuals and organizations. Some have argued that they also serve to reduce moral hazard by creating incentives for risk-reducing investments and behaviours by the vulnerable, and that parametric insurance in particular offers fast and cost-effective support for post-disaster recovery and reconstruction by providing rapid access to funds (Clarke and Dercon 2016).

Achieving any of these benefits faces two significant challenges however. Firstly, the covariant nature of the catastrophic risks associated with natural hazards means that the amount of capital, and hence the associated opportunity costs, required to ensure solvency in the face of low probability but highly impactful events is very large (Powers 2011). Secondly, much of the financial risk is associated with events about which the least is known, namely the rare, highly damaging ones. As a result, (re)insurers face considerable ambiguity around the rare events that matter most to them. Jointly these challenges push up the price of insurance, thereby undermining its usefulness as a mechanism for risk transfer. On the one hand, if insurance is correctly priced or over-priced then catastrophe cover is rendered unaffordable for those who most need it (Charpentier 2008); on the other, if insurance is subsidized or under-priced then there is a systemic risk of collapse of the insurance sector.

The character of the catastrophe insurance sector has been shaped by responses to these two challenges. To offset the covariant nature of catastrophe risks, insurance companies and public sector organizations have sought to transfer risk to reinsurers who hedge risks globally, across different perils in different regions. And to improve the accuracy of risk estimates, the sector (insurers, reinsurers and regulators) has increasingly turned to specialized catastrophe modelling companies to provide them with the projections that they need to make decisions (Shome et al. 2018).Footnote 1 But although the use of cat models has greatly improved probabilistic projections of losses, doubts remain as to whether the models capture all relevant uncertainty. To the extent they don't, (re)insurers continue to face ambiguity, and the standard techniques for settling such questions as what price to put on insurance cover, what capital reserves to hold and how to allocate capacity across different hazards and regions cannot be applied.

The challenge presented to the catastrophe insurance sector by ambiguity, and the importance of finding ways to manage it, make insurance decision making an especially interesting test case for the many theories of rational decision making under ambiguity to be found in the current economics and philosophy literature. That literature contains a number of applications of decision rules for ambiguity to the question of optimal insurance contracts, including Alary et al. (2013), Gollier (2014), Bernard et al. (2015), Jiang et al. (2020) and Birghila et al. (2023). Despite this, it contains only one explicit application of such a rule to the insurer's own decision problem of pricing and capital holding, by Dietz and Walker (2017). I will argue furthermore that prevailing theories only partially provide the resources needed to address the challenge, because they take as inputs factors that in fact need to be determined if a reasonable decision is to be made – in particular, the size of the set of projections that should serve as the basis for decisions. I will then build on recent work on confidence-based decision making (Hill 2013, 2019; Bradley 2017) and on how to embed models within it (Roussos et al. 2021; Bradley et al. 2017) to propose ways of settling the questions mentioned above regarding pricing and capital allocation.

The paper proceeds as follows. The next section briefly presents the standard methods for pricing catastrophe insurance and explains the challenge posed to them by the ambiguity in hazard projections. Section 3 evaluates current theories of decision making under ambiguity in the light of this challenge, and Section 4 applies its lessons to propose how questions regarding the size of capital holdings, the pricing of cover and capacity allocation can be settled in a manner that reflects both the evidential situation of the insurance decision maker and their attitude to ambiguity.

2. Catastrophe Insurance: The Background

At its simplest, insurers make money out of risk by charging premiums, on policies protecting against occurrences of harmful events, that are higher than the expected losses from such events. They are able to do so because, by selling large numbers of policies, they can pool risks that are too great for individual policyholders to bear. If the probability of a large hurricane striking in the next year at each of 100 sites is 5%, for instance, then by charging a customer at each site a premium equal to 10% of the loss to the insurer of a claim in the event of a hurricane, the insurer can expect an annual profit of 5 times the insured loss. So, while an individual customer may be bankrupted by a single catastrophic event, the insurer will only face ruin in the highly improbable circumstance in which a hurricane strikes a very large number of sites. All of this assumes, of course, that the probabilities of strikes at the different sites are not positively correlated. In practice things are a good deal more complicated because natural disasters such as hurricanes tend to affect large numbers of policyholders simultaneously. As a result, insurers against natural disasters need to hold a lot of capital in order to ensure that they stay solvent in the event of a major disaster and/or transfer some of their risk to other institutions such as reinsurers. But the principle remains the same: insurers and reinsurers can tolerate more risk than the insured because (and essentially only insofar as) they can exploit opportunities to hedge against it.
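To make the arithmetic of pooling concrete, here is a minimal simulation sketch (my own illustration, not part of the original argument) that reproduces the 100-site example and adds a hypothetical correlation parameter rho to show how a common shock undermines pooling while leaving the expected profit unchanged.

```python
import numpy as np

# Illustrative numbers from the text: 100 sites, 5% annual strike probability,
# premium = 10% of the insured loss L per site. The correlation model is a
# hypothetical one-factor construction added for illustration only.
rng = np.random.default_rng(0)
n_sites, p_hit, L, premium_rate = 100, 0.05, 1.0, 0.10
n_years = 50_000

def simulate_annual_profit(rho=0.0):
    """Annual underwriting profit when site strikes may share a common shock.

    rho = 0 gives independent sites; rho = 1 makes all sites strike together.
    """
    common = rng.random(n_years) < p_hit              # common shock per year
    idio = rng.random((n_years, n_sites)) < p_hit     # site-specific shocks
    mix = rng.random((n_years, n_sites)) < rho        # which sites follow the common shock
    hits = np.where(mix, common[:, None], idio)
    claims = hits.sum(axis=1) * L
    return n_sites * premium_rate * L - claims

for rho in (0.0, 0.5):
    profit = simulate_annual_profit(rho)
    print(f"rho={rho}: mean profit ≈ {profit.mean():.1f}, "
          f"P(profit < -20) ≈ {(profit < -20).mean():.3f}")
```

With independent sites the insurer almost never loses more than 20 insured losses in a year; with a common shock the mean profit is unchanged but large losses become roughly as likely as the shock itself.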

Standard theory treats insurers as attempting to maximize profit subject to a survival constraint (Stone 1973). To spell out more formally what this entails, consider a state space consisting of all possible states of the world relevant to the performance of the insurer's book.Footnote 2 The book can then be viewed simply as a mapping from each state to a monetary gain or loss, determined by the difference in that state between the premiums collected and the claims paid out plus other costs. To calculate an expected return on the book, the insurer draws on a probability measure P defined on a Boolean algebra of payoff-relevant events. For any book b, let us denote by $x$ the event of b paying out $x$ currency units to settle claims, and let ${\mu _b}$ and ${\sigma _b}$ respectively be the expected pay-out and standard deviation of this book. Now we can define an associated probability measure ${P_b}$ on the Borel σ-algebra of pay-out events by:

$${P_b}\left( x \right) = P\left( {{b^{ - 1}}\left( x \right)} \right)$$

Let the probability of the book b paying out more than x be denoted by ${P_b}(\gt x)$ , a measure known as the exceedance probability for the book b of the event x. Then standard theory says that the insurer will, given book b, set its capital holding ${Z_b}$ to:

$${Z_b} = {\rm{min}}\{ x{:}\ {P_b}( \gt x) \le \kappa \} $$

where $\kappa $ is a benchmark level that depends on the caution or conservatism of the insurer or regulator. Note that this threshold is an acceptable probability of ruin (equivalently, it fixes a required probability of survival) and is independent of the absolute losses and benefits at stake, something we return to later.
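As an illustration of this capital rule, the following sketch (my own, with a hypothetical lognormal loss model) computes ${Z_b}$ as the smallest sampled loss level whose empirical exceedance probability is at or below $\kappa $, i.e. the empirical $(1-\kappa)$-quantile of a simulated annual loss distribution.

```python
import numpy as np

def capital_requirement(losses: np.ndarray, kappa: float) -> float:
    """Smallest sampled x such that the empirical P(losses > x) <= kappa."""
    s = np.sort(losses)
    # order statistic with at most kappa * n sample points above it
    k = int(np.ceil((1.0 - kappa) * len(s))) - 1
    return float(s[k])

rng = np.random.default_rng(1)
# Hypothetical book: lognormal annual losses, purely for illustration.
annual_losses = rng.lognormal(mean=2.0, sigma=1.0, size=200_000)

for kappa in (0.05, 0.01, 0.002):
    print(f"kappa = {kappa:.3f}  ->  Z_b ≈ {capital_requirement(annual_losses, kappa):.1f}")
```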

Now suppose that the insurer is considering whether to sell another contract c, a transaction that will leave her with a book b+c, where this book is defined by b+c(s) = b(s) + c(s) for all states s. The sale will require an increase in capital holding from ${Z_b}$ to ${Z_{b + c}}$, so if the new contract is competitively priced then the expected profit from it cannot be less than the opportunity cost of the additional capital – denoted by $y({Z_{b + c}} - {Z_b})$ – required in order to mitigate the risk of ruin. Now the expected profit from the sale of the new contract is just the difference between its price and the expected losses associated with it, where ${\mu _c} = {\mu _{b + c}} - {\mu _b}$. So it follows that:

(1) $${p_c} \ge{\mu _c} + y\left( {{Z_{b + c}} - {Z_b}} \right)$$

It is common in catastrophe reinsurance to set this price according to Kreps's formula (Kreps 1990):

(2) $${p_c} = {\mu _c} + \iota .{\sigma _c}$$

Here ${\sigma _c}$ is the standard deviation of the new contract c and $\iota $ is the risk load on this contract, which is determined by the difference between the standard deviations of the books b+c and b, the benchmark level $\kappa $ representing the acceptable probability of ruin to the insurer, and the opportunity cost of capital, $y$. As such, $\iota .{\sigma _c}$ will depend on the degree of correlation between the losses associated with the new contract c and those of the current book b held by the insurer.
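The dependence of the loading on correlation can be seen from the identity $\sigma _{b+c}^2 = \sigma _b^2 + 2\rho {\sigma _b}{\sigma _c} + \sigma _c^2$. The sketch below uses a simplified risk load proportional to the marginal increase in the book's standard deviation; the constants $y$ and $z$ and all numbers are hypothetical stand-ins, and the construction differs in detail from Kreps's own derivation.

```python
import math

def marginal_risk_load(sigma_b, sigma_c, rho, y, z):
    """Risk load taken as y * z times the marginal increase in the book's
    standard deviation (a simplified, normal-approximation stand-in for the
    construction of iota described in the text)."""
    sigma_bc = math.sqrt(sigma_b**2 + 2 * rho * sigma_b * sigma_c + sigma_c**2)
    return y * z * (sigma_bc - sigma_b)

# Hypothetical numbers: expected loss 10, contract std dev 5, large existing book.
mu_c, sigma_c, sigma_b, y, z = 10.0, 5.0, 100.0, 0.08, 2.6
for rho in (0.0, 0.3, 0.9):
    load = marginal_risk_load(sigma_b, sigma_c, rho, y, z)
    print(f"rho = {rho:.1f}:  price >= mu_c + load ≈ {mu_c + load:.2f}")
```

A contract uncorrelated with the existing book adds almost nothing to the loading, while a highly correlated one attracts a substantial surcharge.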

This entire theoretical edifice depends on the availability of a probability measure on the set of states that the insurer can use to compute the expected payoffs of possible contracts and the exceedance probabilities from which capital requirements and premiums can be derived. For this they rely on the projections coming from models of natural hazards and vulnerabilities that are typically constructed by others. But a combination of sparse historical data and the complexity of the processes determining hazard and exposure characteristics means that the precise probabilistic outputs of these models do not capture all uncertainty potentially relevant to the insurer. The problem has two characteristic manifestations: in the persistence of multiple rival models of the natural hazard (model disagreement) and residual uncertainty amongst scientists and those drawing on model projections about the reliability of the models themselves (model uncertainty). Both arise because the available data are not sufficient in quantity and quality either to uniquely identify the set of relevant causal factors responsible for the properties of the natural hazard or to fix the precise functional relationships between those that have been identified.

The modelling of the impact of hurricanes provides a useful example. It is striking, firstly, how many models of hurricane formation and of associated landfall rates are to be found in the scientific literature. Guin (2010) reports that the Florida Commission on Hurricane Loss Projection Methodology's 2007 assessment of the modelling industry used an ensemble of 972 models, while Risk Management Solutions, a leading modelling firm, uses an ensemble of 13 models to generate the 'Medium-Term Rate', their preferred prediction of hurricane landfall frequency (Sabbatelli and Waters 2015). These models differ both in their methodology – some use statistical extrapolations from historical landfalling rates, while others are physical models of hurricane formation; some identify periods of greater and lesser hurricane activity based on the hypothesized Atlantic Multidecadal Oscillation, others don't (see Shome et al. 2018) – and in the causal factors they incorporate, e.g. whether the influence of Indian and Pacific Ocean sea-surface temperatures is incorporated in models of hurricane formation in the Atlantic. (See also Bender et al. 2010; Knutson et al. 2010; Ranger and Niehörster 2012.)

Secondly, there is considerable model uncertainty, for a number of reasons. The historical dataset used to score these models is small, as large hurricanes are infrequent. HURDAT2, the standard database for hurricanes hitting the Atlantic coast of the USA, is moderate in size, with ∼300 storms to date and only a third of those counting as 'major hurricanes'. If we split the dataset by region, the numbers drop well below what is typically regarded as sufficient to form a reliable predictive statistical model, and modellers frequently resort to creating 'statistical storms' to expand and 'fill in' the dataset. Model confirmation is further complicated by the fact that scientists expect climate change to affect hurricane generation, which implies that in the future key climate variables driving hurricane formation will be outside their historical ranges. Finally, there is general recognition that existing models omit potentially relevant factors such as the effects of aerosols and pollution, and that hazard metrics exclude many characteristics known to be relevant, such as duration of inundation, flow velocity and pollution levels.

Similar problems arise in assessing the vulnerability of communities to a hurricane hit and of the financial losses associated with it. Claims experience is insufficient for risk estimation in cases of catastrophic loss because the paucity of claims data and trends in the underlying processes make the past an inadequate guide to the future. These trends include changes in exposure characteristics of populations due to factors such as urbanization, changes in vulnerability characteristics such as infrastructure (e.g. flood defences) and regulation (e.g. building standards), and changes in the processes determining the frequency and severity of the natural hazards themselves originating in climate change.

In a nutshell, catastrophe insurers must make decisions not just under risk but under ambiguity, i.e. in circumstances in which they should not have full confidence in any single probability measure of the uncertainty they face. This fact seems to be at least partially recognized by insurers. There is growing empirical evidence, for instance, that insurers and (particularly) reinsurers charge an 'ambiguity premium' when selling coverage against catastrophic events (Kunreuther et al. 1995; Cabantous 2007; Dietz and Niehörster 2021), and some evidence that insurers are reluctant to supply coverage in these conditions (Kunreuther et al. 1993), both expressions of less than full confidence in model-based expected loss projections and an aversion to the uncertainty regarding their reliability. On the other hand, there is little evidence of explicit modelling of ambiguity, nor of procedures for managing it within insurance companies (beyond the kind of averaging techniques described later). This in turn may partially reflect the aforementioned sparsity of theoretical work on insurance decision making under ambiguity and of evaluations of the suitability of the various proposals for ambiguity-sensitive decision rules to insurance applications.

3. Decision Making Under Ambiguity

There is wide recognition in the literature on decision making under ambiguity that it is reasonable for decision makers to be sensitive to the quantity and quality of information available to them and, in particular, to exhibit ambiguity aversion in the form of preferring actions whose consequences are scientifically better understood. I will focus here on the class of decision models that respond to this by looking at more than just a single probabilistic estimate and which instead give consideration to sets of such probabilities and to the corresponding range of expected benefits and losses that they induce. This approach implies that decisions about pricing and capital holdings should be based on the characteristics of this range. Other prominent decision models, such as Choquet expected utility (Schmeidler 1989), use non-probabilistic inputs and I will not consider them here.Footnote 3

A couple of considerations animate the proposals based on sets of probability functions. One is that in situations of ambiguity a decision maker is justified in giving greater weight to the downside risks of alternative actions than to the upside opportunities. The most popular version of this, known as the maximin EU rule, prescribes choice of the action that maximizes the minimum expected benefit (Levi 1974; Gilboa and Schmeidler 1989). Others, such as the alpha-maximin rule, recommend choice based on a 'pessimism'-weighted average of the minimum and maximum expected benefit associated with each action (Ghirardato et al. 2004), or on the best and minimum estimates of expected benefit (Ellsberg 1961), or even on all of the expected benefit estimates, as in the so-called 'smooth ambiguity' rule (Klibanoff et al. 2005). A second thought is that agents should look for actions or policies that achieve pregiven goals robustly, in the sense that they can be expected to reach these goals under all assumptions. More precisely, an action is robust if the expected benefit of performing it is over a required threshold when calculated relative to every probability function in the set of those qualifying for consideration (Gärdenfors and Sahlin 1988; Ben-Haim 2006; Nehring 2009).
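A minimal sketch (my own toy example, not drawn from the literature cited above) of how three of these rules operate, given a set of candidate probability distributions over states and two actions with known pay-offs:

```python
import numpy as np

# Toy setup: 3 states, 2 actions, and a set of candidate distributions over
# the states. All numbers are hypothetical.
payoffs = {
    "insure":      np.array([ 5.0,  5.0, -10.0]),
    "dont_insure": np.array([ 8.0,  2.0, -30.0]),
}
prob_set = [
    np.array([0.70, 0.25, 0.05]),
    np.array([0.60, 0.30, 0.10]),
    np.array([0.50, 0.30, 0.20]),
]

def expected_values(payoff):
    """Expected benefit of an action under each distribution in the set."""
    return np.array([p @ payoff for p in prob_set])

def maximin_eu(actions):
    # Maximin EU: pick the action with the highest minimum expected benefit.
    return max(actions, key=lambda a: expected_values(actions[a]).min())

def alpha_maximin(actions, alpha=0.7):
    # Pessimism-weighted mix of the worst and best expected benefit.
    def score(a):
        ev = expected_values(actions[a])
        return alpha * ev.min() + (1 - alpha) * ev.max()
    return max(actions, key=score)

def robust_actions(actions, threshold=0.0):
    # Actions whose expected benefit clears the threshold under every distribution.
    return [a for a in actions if expected_values(actions[a]).min() >= threshold]

print("maximin EU choice:   ", maximin_eu(payoffs))
print("alpha-maximin choice:", alpha_maximin(payoffs, alpha=0.7))
print("robust (EU >= 0):    ", robust_actions(payoffs, threshold=0.0))
```

Here all three rules favour the less exposed action once the more pessimistic distributions are admitted into the set.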

There is a lot to be said about the relationship between these different proposals and about their relative merits but, for present purposes, it suffices to note that all of them face the same challenge, namely to explain what determines the size of the set of probability functions that are to serve as inputs to the decision-making rule. This is a question that gets surprisingly little attention in the theoretical literature; indeed the literature is largely non-committal even on whether the size of the set should be treated as a subjective parameter, reflecting an attitude on the part of the decision maker to ambiguity, or as an objective one determined by how ambiguous the situation is as a matter of fact. While this issue may not seem important if the aim is to axiomatically characterize different theories, it is manifestly so from the perspective of guiding decision making.

To explore the problem, it will suffice to consider one illustrative application to the setting of capital reserves and the pricing of premiums under ambiguity, involving the maximin EU rule. Let ${\pi _b} = \left\{ {P_b^1, \ldots, P_b^n} \right\}$ be the set of exceedance probabilities for a book b associated with n candidate hazard projections. For any $P_b^i \in {\pi _b}$ and threshold $\kappa $, let $\widehat x_\kappa ^i$ be defined as the minimum amount x such that $P_b^i( \gt x) \le \kappa $. Then a maximally cautious approach to capital reserves would be to require that they be set at the minimum holding such that the probability of a loss greater than this amount is lower than the chosen threshold on every probability function in the set; i.e. that for book b:

(3) $${Z_b} = {\rm{MIN}}\left\{ {x{:}\ \forall P_b^i \in {\pi _b},\ P_b^i\left( { \gt x} \right) \le \kappa } \right\} = {\rm{MAX}}\left\{ {\widehat x_\kappa ^i{:}\ P_b^i \in {\pi _b}} \right\}$$

Less cautious approaches would follow from the adoption of one of the other rules for decision making under ambiguity. Dietz and Walker (2017), for instance, apply the alpha-maximin rule to propose that capital holdings be set to the minimum amount such that a weighted average of the maximum and minimum probability that losses exceed this amount is below the threshold. In all cases however the implications for the size of capital holding that is recommended will depend on the size of the set of exceedance probabilities.
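A sketch of the maximin capital rule in equation (3): each candidate hazard model is represented by a simulated annual-loss sample (the loss models below are hypothetical), and the capital requirement is the largest of the per-model $\kappa$-quantiles.

```python
import numpy as np

def kappa_quantile(losses, kappa):
    """hat-x_kappa for one model: the smallest sampled loss x such that the
    empirical probability of exceeding x is at or below kappa."""
    s = np.sort(losses)
    return float(s[int(np.ceil((1.0 - kappa) * len(s))) - 1])

def maximin_capital(loss_samples, kappa):
    """Equation (3): the capital meeting the ruin threshold under every
    candidate model, i.e. the largest of the per-model requirements."""
    return max(kappa_quantile(l, kappa) for l in loss_samples)

rng = np.random.default_rng(2)
# Three hypothetical hazard models, each yielding its own annual-loss sample.
models = [rng.lognormal(2.0, 0.9, 100_000),
          rng.lognormal(2.1, 1.0, 100_000),
          rng.lognormal(2.0, 1.2, 100_000)]
kappa = 0.005
print("per-model requirements:", [round(kappa_quantile(m, kappa), 1) for m in models])
print("maximin capital Z_b ≈", round(maximin_capital(models, kappa), 1))
```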

To determine this set it is natural to focus on the class of hazard and loss models that are worthy of consideration and the range of estimates that they produce. Such a class might be generated in a number of different ways. Where a model is available that is known or commonly believed to best represent the underlying physical processes generating the catastrophic events, a salient class is the one produced by varying the assumptions about parameter values and initial conditions. But when there is no such model, the set should include all candidate causal and statistical models as well as the variations obtained by perturbing parameter values and initial conditions.

The obvious problem with this approach is that the range of estimates generated by a process like this is likely to be large, especially in the second case. Many of the rules for decision making under ambiguity will then recommend setting premiums and capital reserves at levels that are not commercially viable and which encode levels of ambiguity sensitivity well in excess of those reported in the empirical studies mentioned before. Moreover, there are a variety of reasons why both cat model vendors and insurers purchasing them prefer relatively precise probabilities, not least of which are the requirements imposed by regulators.

The prevailing working solution to this problem amongst vendors of cat models, and some users of them, is to achieve the required precision by averaging the outputs of the different models under consideration, weighting the models in terms of skill (typically using hindcasting to determine skill weights). There are however a number of limitations to this method (see Roussos et al. 2021). In the first place, it is only sensible to average model outputs under very specific conditions, such as when the structural assumptions underlying them are sufficiently similar. This condition is not met in much catastrophe modelling (Philp et al. 2019). Secondly, the historical dataset used to score these models is typically small because the events that matter most (the ones that cause the most damage) are rare. Consequently, hindcasting against this dataset does not significantly distinguish the models. Thirdly, the range of scoring rules on offer is so diverse that almost any reasonable answer could be selected by one of them (Stainforth et al. 2007), so the question remains of which to select. Finally, in practice averaging does not entirely solve the problem for the insurer, since the projections based on such techniques still often differ from vendor to vendor, leaving the insurer confronted with a range of estimates.
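For concreteness, here is a sketch of the kind of skill-weighted blending just described; the softmax-style weighting of hindcast scores and the exceedance curves are my own illustrative choices, not an industry-standard recipe.

```python
import numpy as np

def skill_weights(scores, temperature=1.0):
    """Turn hindcast skill scores into normalized weights (softmax-style;
    the weighting scheme is illustrative, not prescriptive)."""
    w = np.exp(np.asarray(scores) / temperature)
    return w / w.sum()

def blended_exceedance(loss_grid, model_exceedance, weights):
    """Weighted average of per-model exceedance probabilities P_i(>x),
    evaluated on a common grid of loss levels."""
    curves = np.array([ex(loss_grid) for ex in model_exceedance])
    return weights @ curves

# Hypothetical models given as exceedance functions over losses (in $m).
model_exceedance = [
    lambda x: np.exp(-x / 3.0),   # thinner-tailed model
    lambda x: np.exp(-x / 5.0),   # heavier-tailed model
    lambda x: np.exp(-x / 4.0),
]
scores = [0.9, 0.4, 0.7]          # hindcast skill scores (illustrative)
grid = np.linspace(0.0, 30.0, 301)
w = skill_weights(scores)
curve = blended_exceedance(grid, model_exceedance, w)
print("weights:", np.round(w, 2))
print("blended P(loss > 10) ≈", round(float(np.interp(10.0, grid, curve)), 3))
```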

An alternative strategy to averaging over the space of all models is to restrict the set of models to be considered to those meeting some criterion, e.g. of reliability greater than some threshold (as in Gärdenfors and Sahlin 1988) or of lying within some specified distance from the 'best' one, relative to some metric on the space of models (as in Hansen and Sargent 1982). To implement this strategy, we need to be able to say what the criterion for inclusion should be: how reliable a model must be, for instance, or how close it must be to the reference one in order to be considered. Absent a principled way of fixing this criterion, there is a risk of introducing an ad hoc filter on decision inputs.

Let us step back and consider what is at stake here. Any choice of a set of probability distributions amounts in effect to a compromise between robustness and specificity. Suppose a decision depends on some parameter (say rainfall) and consider the set of all probability distributions over its values. Such a set is represented in Figure 1, with subsets (such as E and F) corresponding to sets of claims about, or estimates of, these values, namely those that are supported by all distributions in the subset. Small sets determine fine-grained, precise claims, such as that (E) the probability of flooding is 0.25; larger ones, claims that are either more coarse-grained or less precise, such as that (F) the probability of flooding is between 0.2 and 0.3. Basing a decision on a more precise estimate serves the goal of optimization: this is what makes information valuable to decision makers. On the other hand, basing the decision on a larger set confers robustness on it, in the sense that it will have acceptable consequences over a wider range of possible contingencies. If too little specificity is sought then either no action will be sanctioned (if drawing on the first class of rules for decision making under ambiguity) or only very cautious ones will (if drawing on the second). If too much specificity is sought, then confidence in the correctness of the decision must be sacrificed.

Figure 1. Nested sets of probability distributions over flooding events.

This trade-off between specificity and robustness can be represented by a confidence ranking of sets of probability distributions of the kind illustrated in Figure 2, where the inner, darkly filled set represents the ‘best’ probability distributions and each of the outer, lighter-filled sets contains a sufficiently expanded set of distributions to confer greater confidence on the judgements that it supports than any set of distributions contained within it. (Only three confidence levels are exhibited in this figure, but in principle the confidence ranking can be as fine-grained as the evidence allows.) Any projection supported by a set of probability distributions containing a confidence level is held with confidence equal to or greater than that level. For example, we can read off from this figure that the projection that the probability of flooding is 0.25 is held at low confidence only, but that the projection that it will be between 0.2 and 0.3 is held with medium confidence.

Figure 2. Confidence grading of nested sets of probabilities (earthquake induced losses).

Such a representation of uncertainty helps us see the limitations of the ones standardly adopted. To measure uncertainty by a single probabilistic projection is to focus exclusively on the inner set (indeed on an inner point), thereby ignoring all second-order model uncertainty. To measure it instead by a set of probabilities is to fix on one of the level-sets of the confidence ranking, thereby implicitly making a choice for the decision maker of what level of confidence they should seek in the projections they draw on. Only by looking at the full set of sets of distributions does one get a sense of the trade-off between precision and robustness in the projections engendered by the prevailing level of scientific understanding.

A representation of the ambiguity a decision maker faces by a confidence ranking of decision-relevant projections does not by itself determine what action should be taken. The decision maker also needs to settle on the level of confidence she requires in her choice; that is, how robust she requires the chosen action to be in achieving her goals in the light of the ambiguity she faces. Let us call the characteristic of the agent that determines her confidence requirement in a particular decision problem, her cautiousness. Intuitively, cautiousness is a subjective attitude that can vary between decision makers: a bold agent will require less confidence in her choice of action in any given decision problem than a more cautious one. It is also reasonable to expect, as Hill (2013, 2019) argues, that how cautious an agent is will depend on what is at stake for her in the decision problem she faces: what the range of possible outcomes of any choice of action is and how much she values (or disvalues) these possible consequences, perhaps paying particular attention to the worst and best possible outcomes. Both possibilities are allowed by a formal representation of cautiousness as a function of an agent and a decision problem that picks out a set of probabilistic projections, intuitively the smallest set of projections meeting the confidence requirement that her cautiousness dictates.

If the level required is independent of the decision problem she faces, then she can simply adopt the smallest set of probabilities that meets this confidence threshold and apply one of the rules for decision making under ambiguity mentioned before (in this case the standard representation of ambiguity is sufficient for decision purposes). Plausibly, however, the level of confidence she requires will depend on what is at stake for her: the greater the stakes, the more confidence required. So the set of probability functions that serves as the input to a decision rule will vary with the decision problem.
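A sketch of how these pieces might fit together: a confidence ranking represented as nested sets of admissible probabilities, and a cautiousness function that maps the stakes of the decision problem to a required confidence level and hence to the set of probabilities to use. The cut-offs and intervals are hypothetical.

```python
# Nested sets, ordered from most specific (low confidence) to least specific
# (high confidence): each is an interval of admissible probabilities of some
# event (e.g. flooding). All values are illustrative.
confidence_ranking = [
    ("low",    (0.25, 0.25)),   # the single 'best' estimate
    ("medium", (0.20, 0.30)),
    ("high",   (0.10, 0.40)),
]

def required_confidence(stakes: float) -> str:
    """Cautiousness: higher stakes demand more confidence (illustrative cut-offs)."""
    if stakes < 1e6:
        return "low"
    if stakes < 1e8:
        return "medium"
    return "high"

def probability_set(stakes: float):
    """Smallest set in the ranking meeting the required confidence level."""
    level = required_confidence(stakes)
    for name, interval in confidence_ranking:
        if name == level:
            return interval

for stakes in (5e5, 2e7, 5e8):
    lo, hi = probability_set(stakes)
    print(f"stakes ≈ {stakes:.0e}: use probabilities in [{lo}, {hi}]")
```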

We will return to the implications for insurance decision making in due course, but first let us consider the question of what determines the confidence ranking itself. While the question of how much confidence is required for a decision is something that depends on the decision maker’s aims and values, the trade-off between specificity and robustness captured by a confidence ranking of probabilistic projections is a matter for science to determine. Scientists achieve specificity in their findings by balancing the evidence for and against different claims obtained from running models, taking measurements, conducting laboratory and field experiments and so on. They acquire confidence in these findings by obtaining more evidence and evidence of higher quality, garnered from more diverse sources.

These two considerations are quite distinct. Suppose that I want to know the probability that it will rain tomorrow. At the outset I might do no better than use an estimate of the frequency of rainy days. But, given the opportunity, I could improve this judgement by drawing on state-of-the-art meteorological models and up-to-the-minute data about prevailing conditions, consulting experts in the field, and so on. All this activity could of course leave me with exactly the same probability judgement as I started with. But something would clearly have changed as a result; not the projected probability for rain, but the confidence I am entitled to have in the projection. While the probability of rain tomorrow reflects the balance of evidence for and against this possibility, confidence reflects what Keynes (1921) called the weight of evidence, something which depends on how much evidence there is, its quality and consistency, and perhaps the diversity of its sources (see Joyce 2005).

Much of the scientific modelling of hazards has focused on the delivery of probabilistic projections through the assessment and improvement of models. But modelling is equally important for determining the robustness of projections and thereby the confidence with which they can be held. This can have significant implications for decision making. For instance, contrast a case in which exploration of the space of reasonable models reveals that they make projections that, while different, all lie within a fairly narrow range, with one in which the projections are scattered all over the place. (This is the sort of contrast that would be represented by Figures 2 and 3, for instance.) It could be that while the balance of evidence supports the same precise projection in the two cases, in the former the loss of specificity entailed by adopting an imprecise projection supported by most models is not significant from the decision maker's point of view, while in the latter it is. In the former case, then, the gain in confidence obtained by consulting a wide range of model projections outweighs the loss of specificity, but in the latter it does not.

Figure 3. Confidence grading of nested sets of probabilities (hurricane induced losses).

4. Insurance Decisions

Let us turn now to how confidence rankings of projections – in particular, of exceedance probabilities – can support insurance decision making. Consider first the problem of setting capital reserve requirements for a book. The decision maker must decide not only what threshold they wish to apply but also the level of confidence they require that this threshold will not be exceeded. In principle this level can vary from decision to decision as a function of the stakes. But for the moment let us treat it as a constant and suppose that the decision maker fixes values for a pair of parameters $\left( {\kappa, \gamma } \right)$ where $\kappa $ , as before, is the threshold for an acceptable probability of ruin and $\gamma $ is the level of confidence required. The insurer can then compute capital reserve requirements using the threshold $\kappa $ for each of the exceedance probabilities that fall within the smallest set of such functions meeting the confidence requirement.

More formally, let $\pi ^\gamma = \left\{ {P^1, \ldots, P^n} \right\}$ be the smallest set of probability functions on the Boolean algebra of payoff-relevant events sufficient to achieve confidence $\gamma $, and $\pi _b^\gamma = \left\{ {P_b^1, \ldots, P_b^n} \right\}$ the corresponding set of probability measures on payoffs induced by book b. For any $P_b^i \in \pi _b^\gamma $, let $\mu _b^i$ and $\sigma _b^i$ be the associated expected loss and standard deviation of book b. Then an insurer who seeks to set her capital reserves at a level at which she can be sufficiently confident that the risk of ruin is below the threshold will set them according to:

(4) $$Z_b^\gamma = {\rm{MIN}}\left\{ {x{:}\ \forall {P_b} \in \pi _b^\gamma,\ {P_b}\left( { \gt x} \right) \le \kappa } \right\}$$

In other words, she will choose the smallest capital sum such that the probability of ruin falls below threshold $\kappa $ with confidence $\gamma $ .

To determine the price of any new contract c, the insurer will need to consider a range of (changes in) expected losses and standard deviations associated with c that is sufficiently broad to meet her confidence requirements. She can then apply equation (1), using the calculation of capital reserves suggested above, or, more directly, Kreps's pricing formula (2), in both cases using each of the exceedance probabilities induced by c. More formally, let $\pi _c^\gamma $ be the set of probability measures on payoffs induced by the new contract c and the smallest set of probabilities sufficient to achieve confidence $\gamma $. Then the highest of the resultant range of prices, calculated using each of the members of $\pi _c^\gamma $, should be selected. In particular, if the Kreps formula is used for pricing contracts for a given risk, then she should require:

(5) $${p_c} \ge {\rm{MAX}}\left\{ {\mu _c^i + \iota .\sigma _c^i{:}\ P_c^i \in \pi _c^\gamma } \right\}$$

At any such price ${p_c}$ the insurer can expect with sufficient confidence to make a profit and avoid ruin.
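A sketch of the pricing rule (5): given the per-model expected losses and standard deviations for a new contract within the confidence set, the technical price is the largest of the per-model Kreps prices. The two contracts below share the same best estimate but differ in how wide a set is needed to meet the confidence requirement; all numbers are hypothetical.

```python
def confidence_price(confidence_set, iota):
    """Equation (5): the technical price is the largest, over models in the
    confidence set pi_c^gamma, of mu_c^i + iota * sigma_c^i."""
    return max(mu + iota * sigma for mu, sigma in confidence_set)

iota = 0.5
# Hypothetical (expected loss, std dev) pairs in $m under each admissible model.
well_understood_peril   = [(0.95, 0.9), (1.00, 1.0), (1.05, 1.1)]
poorly_understood_peril = [(0.80, 0.7), (1.00, 1.0), (1.30, 1.6)]

print("p_c >=", round(confidence_price(well_understood_peril, iota), 2))    # ≈ 1.60
print("p_c >=", round(confidence_price(poorly_understood_peril, iota), 2))  # ≈ 2.10
```

The wider confidence set pushes up the minimum acceptable price even though the central estimate is the same.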

In practice market competition makes individual insurers price takers and the significant decision is whether to write policies at the market price and how much exposure to accept, in the light of the ‘technical’ price obtained by application of their pricing formula. Confidence considerations should play an important role here as well. Consider, for example, a very simple case in which an insurer can decide whether to write a certain quantity of business in two different markets for protection against losses deriving from events uncorrelated with her current book (e.g. hurricane insurance in Florida and earthquake insurance in Pakistan). Suppose that the best estimate of the exceedance probabilities is the same for both contracts but that the weight of evidence supporting those for the first (say the hurricane projections) is much greater than those for the second (the earthquake projections). The situation is then as illustrated by Figures 2 and 3 in which for any confidence level the set of probabilities required to achieve that level is larger for the earthquake projections (given by Figure 2) than the hurricane ones (given by Figure 3). Application of pricing equation (5) will then yield higher minimum prices for the insurance against earthquake damage than hurricane damage. The insurer should therefore enter the first market in preference to the second if market prices for insurance are the same in both. More generally, they should prefer the first in case the difference in price required to achieve the requisite confidence of ruin avoidance exceeds the difference in the price for insurance contracts in the two markets.

The argument of the previous paragraph implicitly rests on the assumption that the insurer’s exposure to the two events (the hurricanes and earthquakes) is roughly the same. When this is not the case consideration must also be given to the opportunity to hedge risks afforded by diversifying one’s portfolio of business. To keep things simple, suppose that the insurer has already written a good deal of hurricane insurance but none for earthquakes and must now choose between writing more contracts for hurricanes or writing the same volume of business in insurance against earthquake damage. Now two considerations will need to be balanced: the fact that writing earthquake insurance affords a hedging of the risks and the fact that projections of earthquake-caused losses are more ambiguous. We can do this by applying the Kreps pricing formula to marginal increases in business in both markets and identifying the apportioning of business that equalizes the differences between market and technical prices.

Let us turn finally to the possibility of reducing exposure through reinsurance. Figure 4 below shows three loss exceedance curves deriving from different models of the underlying hazards and of the vulnerability of insured assets. Suppose that the insurer’s confidence requirement dictates that they consider all three curves. Application of equation (4) with a threshold of 0.2% yields a relatively high capital holding requirement of around 10 million dollars. To avoid this the insurer could seek to reinsure against the losses associated with the 5–0.2% probability range with a less ambiguity averse reinsurer. For instance, suppose the reinsurer is ambiguity neutral and uses only the grey loss exceedance curve so that application of the 0.2% threshold would imply capital holdings of 8 million dollars. Then while the insurer must set aside an additional seven million dollars to take the risk of ruin from below 5% to below 0.2%, the reinsurer can achieve this by setting aside only an additional five million dollars. The difference in the opportunity costs of a capital holding of seven and five million represents the potential gains from reinsurance.

Figure 4. Candidate loss exceedance curves.
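The reinsurance arithmetic in this example can be reproduced with a small sketch; the exceedance curves below are hypothetical step curves chosen only to match the round figures quoted in the text, not data read off Figure 4.

```python
def capital(curve, threshold):
    """Smallest loss level on the curve whose annual exceedance probability
    is at or below the threshold. `curve` is a list of
    (loss_in_millions, exceedance_probability) points."""
    return min(loss for loss, p in curve if p <= threshold)

# Hypothetical step curves: the insurer's most pessimistic curve and the
# reinsurer's 'grey' curve, chosen to reproduce the text's round numbers.
insurer_curve   = [(3, 0.05), (6, 0.01), (10, 0.002)]
reinsurer_curve = [(3, 0.05), (5, 0.01), (8, 0.002)]

extra_insurer   = capital(insurer_curve, 0.002) - capital(insurer_curve, 0.05)
extra_reinsurer = capital(reinsurer_curve, 0.002) - capital(reinsurer_curve, 0.05)
print("additional capital, insurer:  ", extra_insurer, "m")   # 7 m
print("additional capital, reinsurer:", extra_reinsurer, "m") # 5 m
```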

5. Concluding Remarks

On the analysis given here, the price of catastrophe insurance depends on three factors:

(1) The ambiguity profile of projections of the insured hazard.

(2) The risk attitudes of the insurer, as measured by the probability of ruin threshold $\kappa $ and the confidence requirement $\gamma $.

(3) The exposure characteristics of the insurer's book; in particular its size and diversity, as captured by ${\mu _b}$ and ${\sigma _b}$.

This suggests three corresponding ways in which the price of insurance can be reduced. The first is through improvements in scientific understanding of the hazard. While new research may of course lead to higher estimates of the probability of the hazard, the increase in confidence that improvements in scientific understanding justify will serve to offset this to some degree (and to magnify the effect on the price of a reduced probability estimate).

The second path is through the optimization of the exposure characteristics of the insurer's book through diversification; for instance, by off-setting exposure to one kind of peril in one region by selling contracts for different perils or in other regions. The benefits of diversification are well understood, but the analysis here shows that they have to be balanced against increases in ambiguity that may result from selling contracts for perils or regions for which the level of scientific understanding is lower.

The third and final way in which prices can be reduced is by risk transfer or hedging, e.g. through reinsurance, partial socialization of the risk or government take-up of layers of the exposure. Again, there is nothing new about this, but the presence of ambiguity creates an additional need and opportunity for transferring exposure from the ambiguity averse to agencies that are less so. Indeed, because very high levels of ambiguity are characteristic of the rare but extremely dangerous catastrophic events, it may not be possible to insure against them without some transfer of exposure to the public sector. In this context, initiatives such as the 2008 Munich Climate Insurance Initiative (Linnerooth-Bayer et al. 2009) and the recent (2018) launch of the Global Risk Financing Facility are to be welcomed.

Acknowledgements

This work greatly benefitted from research collaborations over the past five years with Roman Frigg, Joe Roussos and Tom Philp on the topic of catastrophe modelling in support of insurance decision making.

Competing interests

None.

Richard Bradley is Professor of Philosophy in the Department of Philosophy, Logic and Scientific Method at the London School of Economics and a Fellow of the British Academy. His research is concentrated in decision theory, formal epistemology and the theory of social choice, but he also works on conditionals and the nature of chance. His book Decision Theory with a Human Face (Cambridge University Press, 2017) gives an account of decision making under conditions of severe uncertainty suitable for rational but bounded agents. Recently he has been working on policy decision making under scientific uncertainty applied to climate change, natural catastrophes and pandemics.

Footnotes

1 Industry folklore has it that it was hurricane Andrew, and the magnitude of the losses associated with it, that persuaded the industry of the value of sophisticated cat modelling.

2 Here I follow Dietz and Walker (2017).

3 See Gilboa and Marinacci (2013) and Heal and Milner (2014) for surveys of existing proposals for decision rules for ambiguity.

References

Alary, D., Gollier, C. and Treich, N. 2013. The effect of ambiguity aversion on insurance and self-protection. Economic Journal 123, 1188–1202.
Bender, M.A., Knutson, T.R., Tuleya, R.E., Sirutis, J.J., Vecchi, G.A., Garner, S.T. and Held, I.M. 2010. Modeled impact of anthropogenic warming on the frequency of intense Atlantic hurricanes. Science 327, 454–458.
Ben-Haim, Y. 2006. Info-gap Decision Theory, 2nd Edition. London: Academic Press.
Bernard, C., He, X., Yan, J.-A. and Zhou, X.Y. 2015. Optimal insurance design under rank-dependent expected utility. Mathematical Finance 25, 154–186.
Birghila, C., Boonen, T.J. and Ghossoub, M. 2023. Optimal insurance under maxmin expected utility. Finance and Stochastics 27, 467–501.
Bradley, R. 2017. Decision Theory with a Human Face. Cambridge: Cambridge University Press.
Bradley, R., Helgeson, C. and Hill, B. 2017. Climate change assessments: confidence, probability, and decision. Philosophy of Science 84, 500–522.
Cabantous, L. 2007. Ambiguity aversion in the field of insurance: insurers' attitude to imprecise and conflicting probability estimates. Theory and Decision 62, 219–240.
Charpentier, L. 2008. The insurability of climate risks. The Geneva Papers 33, 91–109.
Clarke, D.J. and Dercon, S. 2016. Dull Disasters? How Planning Ahead Will Make a Difference. Oxford: Oxford University Press.
Dietz, S. and Niehörster, F. 2021. Pricing ambiguity in catastrophe risk insurance. The Geneva Risk and Insurance Review 46, 112–132.
Dietz, S. and Walker, O. 2017. Ambiguity and insurance: capital requirements and premiums. Journal of Risk and Insurance 86, 213–235.
Ellsberg, D. 1961. Risk, ambiguity, and the Savage axioms. Quarterly Journal of Economics 75, 643–669.
Gärdenfors, P. and Sahlin, N. 1988. Unreliable probabilities, risk taking, and decision making. In Decision, Probability and Utility: Selected Readings, eds. Gärdenfors, P. and Sahlin, N., 313–334. Cambridge: Cambridge University Press.
Ghirardato, P., Maccheroni, F. and Marinacci, M. 2004. Differentiating ambiguity and ambiguity attitude. Journal of Economic Theory 118, 133–173.
Gilboa, I. and Marinacci, M. 2013. Ambiguity and the Bayesian paradigm. In Advances in Economics and Econometrics: Theory and Applications, Tenth World Congress of the Econometric Society, eds. Acemoglu, D., Arellano, M. and Dekel, E., 179–242. New York, NY: Cambridge University Press.
Gilboa, I. and Schmeidler, D. 1989. Maxmin expected utility with non-unique prior. Journal of Mathematical Economics 18, 141–153.
Gollier, C. 2014. Optimal insurance design of ambiguous risks. Economic Theory 57, 555–576.
Guin, J. 2010. Understanding uncertainty. AIR Worldwide Blog. www.air-worldwide.com/Publications/AIR-Currents/2010/Understanding-Uncertainty/.
Hallegatte, S., Bangalore, M., Bonzanigo, L., Fay, M., Kane, T., Narloch, U., Rozenberg, J., Treguer, D. and Vogt-Schilb, A. 2015. Shock Waves: Managing the Impacts of Climate Change on Poverty. Washington, DC: World Bank Publications.
Hansen, L.P. and Sargent, T.J. 1982. Robustness. Princeton, NJ: Princeton University Press.
Heal, G. and Milner, A. 2014. Uncertainty and decision making in climate change economics. Review of Environmental Economics and Policy 8, 120–137.
Hill, B. 2013. Confidence and decision. Games and Economic Behavior 82, 675–692.
Hill, B. 2019. Confidence in beliefs and rational decision making. Economics and Philosophy 35, 223–258.
Jiang, W., Escobar-Anel, M. and Ren, J. 2020. Optimal insurance contracts under distortion risk measures with ambiguity aversion. ASTIN Bulletin 50, 619–646.
Joyce, J. 2005. How probabilities reflect evidence. Philosophical Perspectives 19, 153–178.
Keynes, J.M. 1921. A Treatise on Probability. London: Macmillan.
Klibanoff, P., Marinacci, M. and Mukerji, S. 2005. A smooth model of decision making under ambiguity. Econometrica 73, 1849–1892.
Kreps, R. 1990. Reinsurer risk loads from marginal surplus requirements. Proceedings of the Casualty Actuarial Society 76, 196–203.
Knutson, T.R., McBride, J.L., Chan, J., Emanuel, K., Holland, G., Landsea, C., Held, I., Kossin, J.P., Srivastava, A.K. and Sugi, M. 2010. Tropical cyclones and climate change. Nature Geoscience 3, 157–163.
Kunreuther, H., Hogarth, R. and Meszaros, J. 1993. Insurer ambiguity and market failure. Journal of Risk and Uncertainty 7, 71–87.
Kunreuther, H., Meszaros, J., Hogarth, R. and Spranca, M. 1995. Ambiguity and underwriter decision processes. Journal of Economic Behavior and Organization 26, 337–352.
Levi, I. 1974. On indeterminate probabilities. Journal of Philosophy 71, 391–418.
Linnerooth-Bayer, J., Warner, K., Bals, C., Höppe, P., Burton, I., Loster, T. and Haas, A. 2009. Insurance, developing countries and climate change. Geneva Papers on Risk and Insurance: Issues and Practice 34, 381–400.
Nehring, K. 2009. Coping rationally with ambiguity: robustness versus ambiguity-aversion. Economics and Philosophy 25, 303–334.
Philp, T., Sabbatelli, T., Roberston, C. and Wilson, P. 2019. Issues of importance to the (re)insurance industry: a timescale perspective. In Hurricane Risk, eds. Collins, J. and Walsh, K., Vol. 1. Cham: Springer.
Powers, M. 2011. Acts of God and Man: Ruminations on Risk and Insurance. New York, NY: Columbia University Press.
Ranger, N. and Niehörster, F. 2012. Deep uncertainty in long-term hurricane risk: scenario generation and implications for future climate experiments. Global Environmental Change 22, 703–712.
Ritchie, H. and Roser, M. 2024. Natural disasters. https://ourworldindata.org/natural-disasters.
Roussos, J., Bradley, R. and Frigg, R. 2021. Making confident decisions with model ensembles. Philosophy of Science 88, 439–460.
Sabbatelli, T. and Waters, J. 2015. 'We're still all wondering – where have all the hurricanes gone?' The RMS Blog. www.rms.com/blog/2015/10/27/were-still-all-wondering-where-have-all-the-hurricanes-gone/.
Schmeidler, D. 1989. Subjective probability and expected utility without additivity. Econometrica 57, 571–587.
Shome, N., Rahnama, M., Jewson, S. and Wilson, P. 2018. Quantifying model uncertainty and risk. In Risk Modeling for Hazards and Disasters, ed. Michel, G., 3–46. Amsterdam: Elsevier.
Stainforth, D.A., Allen, M.R., Tredger, E.R. and Smith, L. 2007. Confidence, uncertainty and decision-support relevance in climate predictions. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 365, 2145–2161.
Stone, J.M. 1973. A theory of capacity and the insurance of catastrophe risks. Journal of Risk and Insurance 40, 231–244.
UNDDR. 2020. Human Cost of Disasters: An Overview of the Last 20 Years 2000–2019. New York, NY: United Nations.