This paper argues that most of the problems actuaries have to deal with in the context of non-life insurance can be usefully cast in the framework of computational intelligence (also known as artificial intelligence), the discipline that studies the design of agents exhibiting intelligent behaviour. Finding an adequate framework for actuarial problems has more than purely theoretical interest: it also allows a technological transfer from the computational intelligence discipline to general insurance, wherever techniques have been developed for problems common to both contexts. This has already happened in the past (neural networks, clustering and data mining have all found applications in general insurance), but not in a systematic way. One of the objectives of this paper will therefore be to introduce some useful techniques, such as sparsity-based regularisation and dynamic decision networks, that are not yet known to the wider actuarial community.
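To fix ideas on the first of these, the following is a minimal, purely illustrative sketch of sparsity-based regularisation, here in its simplest form (a lasso, i.e. least squares with an L1 penalty) on synthetic data; the synthetic rating factors and the use of scikit-learn are illustrative choices of ours, not a description of the paper's own treatment.

```python
# Minimal sketch of sparsity-based regularisation (lasso / L1 penalty).
# All data here are synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_policies, n_factors = 500, 20

# Twenty candidate rating factors, of which only three actually
# drive the response (e.g. a log claim cost).
X = rng.normal(size=(n_policies, n_factors))
true_coef = np.zeros(n_factors)
true_coef[[0, 4, 9]] = [1.5, -2.0, 0.8]
y = X @ true_coef + rng.normal(scale=0.5, size=n_policies)

# The L1 penalty (weight alpha) shrinks irrelevant coefficients to
# exactly zero, so the fitted model selects a sparse set of factors.
model = Lasso(alpha=0.1).fit(X, y)
print("factors retained:", np.flatnonzero(model.coef_))
```

The point of the L1 penalty is that, unlike ordinary least squares, it produces coefficients that are exactly zero, so irrelevant rating factors are dropped automatically rather than merely shrunk.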
Whilst the first part of this paper dealt mainly with data-driven loss modelling under the assumption that all the data were accurate and fully relevant to the exercise, in this second part we explore how to deal with uncertain knowledge, whether the uncertainty comes from the data not being fully reliable (e.g. they are estimates), from the knowledge being “soft” (e.g. expert beliefs), or from it being not fully relevant (e.g. market information on a given risk). Most importantly, we will deal with the problem of making pricing, reserving and capital decisions under uncertainty. It will be concluded that a Bayesian framework is the most adequate for dealing with uncertainty, and we will present a number of computational intelligence techniques for putting this into practice.
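As a generic illustration of Bayesian updating in a familiar actuarial setting (and not a summary of the specific techniques presented later), consider a Poisson claim count with a gamma prior on the claim frequency $\lambda$:
\[
\lambda \sim \mathrm{Gamma}(\alpha,\beta), \qquad N_i \mid \lambda \sim \mathrm{Poisson}(\lambda).
\]
After observing claim counts $n_1,\ldots,n_k$ over $k$ years, the posterior is
\[
\lambda \mid n_1,\ldots,n_k \sim \mathrm{Gamma}\Big(\alpha + \textstyle\sum_{i=1}^k n_i,\; \beta + k\Big),
\]
so that the posterior mean
\[
\mathbb{E}[\lambda \mid n_1,\ldots,n_k] \;=\; \frac{\beta}{\beta+k}\cdot\frac{\alpha}{\beta} \;+\; \frac{k}{\beta+k}\cdot\frac{\sum_{i=1}^k n_i}{k}
\]
is a weighted average of the prior estimate $\alpha/\beta$ and the observed claim frequency. The weight $k/(\beta+k)$ attached to the data grows with the volume of experience, mirroring classical credibility theory and showing how “soft” prior knowledge and hard data can be combined in a single coherent framework.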