In practical applications of the collective theory of risk one is very often confronted with the problem of making some kind of assumption about the form of the distribution functions underlying the frequency as well as the severity of claims. Lundberg's [6] and Cramér's [3] approaches are essentially based upon the hypothesis that the number of claims occurring in a certain period obeys the Poisson distribution, whereas for the conditional distribution of the amount claimed upon occurrence of such a claim the exponential distribution is very often used. Of course, by weighting the Poisson distributions (as done e.g. by Ammeter [1]) one enlarges the class of "frequency of claims" distributions considerably, but nevertheless there remains an uneasy feeling about artificial assumptions which are made merely for mathematical convenience and are not necessarily related to the practical problems to which the theory of risk is applied.
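In symbols, and purely as a restatement for the reader's orientation (the notation $N$, $X_i$, $\lambda$, $\beta$, $U$ is mine, not that of the papers cited): if $N$ denotes the number of claims in the period and $X_i$ the amount of the $i$-th claim, the classical hypotheses read
\[
P(N = k) = e^{-\lambda}\,\frac{\lambda^{k}}{k!}\,, \qquad
P(X_i \le x) = 1 - e^{-x/\beta} \quad (x \ge 0),
\]
and weighting the Poisson parameter by a structure function $U$, in the manner of Ammeter [1], yields the mixed Poisson law
\[
P(N = k) = \int_{0}^{\infty} e^{-\lambda}\,\frac{\lambda^{k}}{k!}\; dU(\lambda).
\]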
It seems to me that, before applying the general model of the theory of risk, one should always ask the question: "How much information do we want from the mathematical model which describes the risk process?" The answer will be that in many practical cases it is sufficient to determine the mean and the variance of this process. Let me mention only rate making, experience control, refund problems and the detection of secular trends in a certain risk category. In all these cases the practical solutions seem to be sufficiently determined by mean and variance.
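To indicate why mean and variance may suffice, recall the well-known moment identities for the aggregate claims $S = X_{1} + \dots + X_{N}$ (the $X_i$ independent, identically distributed and independent of $N$); they hold whatever the form of the distributions involved:
\[
E(S) = E(N)\,E(X), \qquad
\operatorname{Var}(S) = E(N)\,\operatorname{Var}(X) + \operatorname{Var}(N)\,\bigl(E(X)\bigr)^{2}.
\]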
Let us therefore attack the problem of determining mean and variance of the risk process while trying to make as few assumptions as possible about the type of the underlying probability distributions. This approach is not original. De Finetti [5] has already proposed an approach to risk theory based only upon the knowledge of mean and variance. It is along his lines of thought, although in different mathematical form, that I wish to proceed.