
Asset–liability modelling in the quantum era

Published online by Cambridge University Press:  26 July 2021


Abstract

This paper introduces and demonstrates the use of quantum computers for asset–liability management (ALM). A summary of historical and current practices in ALM used by actuaries is given showing how the challenges have previously been met. We give an insight into what ALM may be like in the immediate future demonstrating how quantum computers can be used for ALM. A quantum algorithm for optimising ALM calculations is presented and tested using a quantum computer. We conclude that the discovery of the strange world of quantum mechanics has the potential to create investment management efficiencies. This in turn may lead to lower capital requirements for shareholders and lower premiums and higher insured retirement incomes for policyholders.

Type: Sessional Paper

Creative Commons licence: CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© Institute and Faculty of Actuaries 2021

1. Introduction

1.1. Overview

This paper provides a summary of historical and current practices in asset–liability management (ALM) used by actuaries. We also give an insight into what ALM may be like in the immediate future, demonstrating how quantum computers can be used for ALM. A quantum algorithm for optimising ALM calculations is presented and tested using a quantum computer. To our knowledge, no similar actuarial paper has yet been produced on this topic.

2. History of Asset–Liability Management

2.1. The Origins of ALM

Actuaries have carried out ALM since the first insurance companies projected liabilities and interest accrued on their assets. In 1748, the “Fund for a Provision for the Widows and Children of the Ministers of the Church of Scotland” was managed by actuaries Robert Wallace and Alexander Webster. The fund was valued at £18,620 in 1748, at which point they projected its value to be £58,348 by the year 1765. In fact, they were out by £1, with the actual amount being £58,347 in 1765 (Ferguson, 2009).

A principle-based description of ALM was given in 1862 by Arthur Bailey. In a presentation to the Institute of Actuaries, Bailey set out five principles to guide investment for life insurance funds:

  1. That the first consideration should invariably be the security of the capital.

  2. That the highest practicable rate of interest be obtained, but that this principle should always be subordinate to the previous one, the security of the capital.

  3. That a small proportion of the total funds (the amount varying according to the circumstances of each individual case), should be held in readily convertible securities for the payment of current claims, and for such loan transactions as may be considered desirable.

  4. That the remaining and much larger proportion may safely be invested in securities that are not readily convertible; and that it is desirable, according to the second principle, that it should be so invested, because such securities, being unsuited for private individuals and trustees, command a higher rate of interest in consequence.

  5. That, as far as practicable, the capital should be employed to aid the life assurance business. (Bailey, 1862)

Even at this early stage, these principles provide an excellent summary of what an investment plan for a life insurance fund should aim for. In 1933, Charles Coutts (President-Elect of the Institute of Actuaries) described the ALM challenge thus: “an attempt should be made to ‘marry’ the liabilities and the assets as far as possible” (Turnbull, 2017, p. 103). This echoes the principles in Bailey’s paper, but Turnbull also highlights that “no actuarial framework for investment strategy existed beyond Bailey’s 70-year-old investment principles” (Turnbull, 2017, p. 103). This was to change over the next two decades with the introduction of formal quantitative methods of analysis.

2.2. Quantitative Methods in ALM

The move from a principles-based approach towards a more quantitative approach was started by Frederick Macaulay (1938), who defined the duration of a bond (now known as “Macaulay duration”) as the weighted average maturity of its cash flows:

$${\rm{Macaulay}}\,{\rm{Duration}} = {{\sum\nolimits_{i = 1}^n {t_i}\,P{V_i}} \over {\sum\nolimits_{i = 1}^n P{V_i}}}$$

where

${t_i}$ is the time in years of the $i$th cash flow;

$i$ is the index of the asset cash flow;

${PV}_i$ is the present value of the $i$th cash flow (see Hull, 1993, pp. 99–101).
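As a brief illustration (a minimal sketch in Python, not code from the paper; the bond and yield figures are invented), the formula can be computed directly from a schedule of cash flows:

```python
# Minimal sketch: Macaulay duration as the PV-weighted average time of the
# cash flows, using an annually compounded yield (illustrative figures only).
def macaulay_duration(times, cash_flows, yield_rate):
    pvs = [cf / (1 + yield_rate) ** t for t, cf in zip(times, cash_flows)]
    return sum(t * pv for t, pv in zip(times, pvs)) / sum(pvs)

# A 3-year bond paying 5% annual coupons, valued at a 4% yield:
print(macaulay_duration([1, 2, 3], [5, 5, 105], 0.04))   # about 2.86 years
```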

The next major step, which dominated actuarial practice for decades, was Frank Redington’s concept of “immunization”. In Redington’s 1952 paper “Review of the Principles of Life-Office Valuations”, a section is set aside for “Matching of Investment – Immunization”, which discusses the principles before proposing a simple quantitative structure to ensure assets and liabilities are carefully matched.

“In its widest sense this principle includes such important aspects as the matching of assets and liabilities in currencies. In the narrower … ‘matching’ implies the distribution of the term of the assets in relation to the term of liabilities in such a way as to reduce the possibility of loss arising from a change in interest rates” (Redington, 1952, p. 3). The quantitative approach is described below:

Let ${L_t}$ be the expected net outgo of the existing business in calendar year t.

Let ${A_t}$ be the expected proceeds from the existing assets in calendar year t.

Let ${V_L}$ be the present value of the liability outgo at the force of interest δ, so that ${V_L} = \;\sum {v^t}{L_t}$ .

Let ${V_A}$ be the present value of the asset proceeds at the force of interest δ, so that ${V_A} = \;\sum {v^t}{A_t}$ .

We assume ${V_A} = {V_L}$, with any excess being “free” funds to invest separately.

If the force of interest changes from δ to δ + ϵ, changing ${V_A}$ to ${V_A}'$ and ${V_L}$ to ${V_L}'$, then the position after the change of interest is given by Taylor’s theorem:

$$V_A^\prime - V_L^{\prime} = {V_A} - {V_L} + \varepsilon {{d\left( {{V_A} - {V_L}} \right)} \over {d\delta }} + {{{\varepsilon ^2}} \over {2!}}{{{d^2}\left( {{V_A} - {V_L}} \right)} \over {d{\delta ^2}}} + \ldots $$

The first term vanishes as ${V_A} = {V_L}$ .

Redington required the first derivative also to be zero for the fund to be immunised, i.e. ${{d\left( {{V_A} - {V_L}} \right)} \over {d\delta }} = 0$. If the second derivative is positive, then, since the coefficient ${{{\varepsilon ^2}} \over {2!}}$ is positive whether ϵ is positive or negative, any change in the force of interest will result in a profit, provided the change is small enough that the higher-order terms (cubic and above) in the Taylor expansion have no material impact. A satisfactory immunisation policy can be expressed symbolically in the two equations:

(1) $${{d\left( {{V_A} - {V_L}} \right)} \over {d\delta }} = 0$$
(2) $${{{d^2}\left( {{V_A} - {V_L}} \right)} \over {d{\delta ^2}}} \gt 0$$

Equation (1) can be written as $\sum t{v^t}{A_t} = \sum t{v^t}{L_t}$ .

Equation (2) can be written as $\sum {t^2}{v^t}{A_t} \gt \sum {t^2}{v^t}{L_t}$ .

Redington describes the approach with reference to interest rate changes, but there is no reason it cannot be extended to include inflation, currency and other risks to which the fund is exposed. Redington notes that there are infinitely many solutions to equations (1) and (2), but does not give guidance on which solutions might be preferable (i.e. as could be done with reference to something like Bailey’s five principles, such as maximising yield whilst minimising credit risk …). Furthermore, the solutions look only at the overall present values and durations of the assets, without considering differences in cash flows in particular years. Addressing these issues would require a significant increase in computing power before the next stage in quantitative ALM could progress; this is explored in the next section.
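To make the two conditions concrete (a minimal sketch, not from the paper; the cash flows supplied would be illustrative), they can be checked numerically for any pair of projected asset and liability cash flows:

```python
import math

# Check Redington's immunisation conditions for asset proceeds A_t and
# liability outgo L_t at a force of interest delta (illustrative sketch).
def immunisation_check(A, L, delta):
    """A, L: dicts mapping year t to expected cash flow."""
    v = lambda t: math.exp(-delta * t)          # v^t at force of interest delta
    pv_gap = sum(v(t) * a for t, a in A.items()) - sum(v(t) * l for t, l in L.items())
    d1 = sum(t * v(t) * a for t, a in A.items()) - sum(t * v(t) * l for t, l in L.items())
    d2 = sum(t**2 * v(t) * a for t, a in A.items()) - sum(t**2 * v(t) * l for t, l in L.items())
    # Immunised if pv_gap = 0, d1 = 0 (equation (1)) and d2 > 0 (equation (2)).
    return pv_gap, d1, d2
```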

3. Current Practices in Asset–Liability Management

3.1. Specification of the Problem

At this point, it makes sense to define an aim of ALM which captures the principles of both Bailey and Redington. First, we look specifically at fixed-income assets, or assets with payments varying with some form of inflation at pre-specified intervals; not equities, property or options.

The problem is to maximise yield on assets (in line with Bailey principle [2]), whilst ensuring cash flows match the liabilities to specified requirements. The specified requirements can vary depending on regulations and company risk tolerances, but some typical requirements are:

  • That the present value of asset cash flows (de-risked by default allowances) is equal to the present value of liability cash flows.

  • That the difference between asset and liability cash flows in any given year is less than a prescribed amount.

  • That solvency capital requirements for credit risk, interest rate risk, inflation risk and currency risk are below prescribed amounts.

  • That limitations are in place on assets in certain credit ratings (e.g. sub-investment grade).

  • That limitations are in place on illiquid assets.

Redington noted there are an infinite number of solutions to the equations he proposed. This is because there is a large universe of assets that could be considered, and the amounts of each holding can be varied in any number of possible combinations. The universe of assets could be all assets available in the market, those on the investment analysts’ recommendation list, or some other subset.

Choosing the combination of assets that best fit the above criteria is a constrained optimisation problem, which can be solved using the methods described in the next section.

3.2. Operations Research Techniques

An algorithm that is simple to describe (but difficult to implement) is to consider all combinations of all assets and select the one that solves the problem specified in section 3.1 while also meeting the constraints. Whilst this algorithm would work eventually, even with the fastest supercomputer such a simple brute force approach would take many billions of years. For example, a set of 100 assets would have ${2^{100}}$ possible combinations, which is of the order of ${10^{30}}$. If a calculation on each combination took a computer 1 microsecond, then the total time required would be longer than the time elapsed since the start of the universe. In practice, the available number of assets is much higher than 100, and we can hold any amount of each asset, making the problem far more complex than this simple example. Operations research techniques were developed to produce algorithms that solve these types of problems in a very short time frame.

During the Second World War, operations research techniques were widespread (e.g. as later described by Dyson, 2006). Following the war, these techniques were applied in industry, government and businesses. Benjamin (1958) describes how they can be used for actuarial valuation, and several papers then apply them to ALM, including Chambers & Charnes (1961) and Cohen & Hammer (1967). Operations research includes a wide variety of techniques, but typically it involves optimising some target, subject to a number of constraints, and doing so in a short time frame. For example, inventory management might target minimising waste whilst always ensuring enough product is available for sale at any point in time. The complexity of these problems is usually beyond what could be solved manually and so requires a computer.

Two actuarial papers from the 1990s, Kocherlakota et al. (1990) and Brockett & Xiat (1995), describe the cash flow matching challenge in detail, showing how different techniques can be applied directly to solve the problem described in the previous section.

Using these techniques, it is possible to produce optimised asset strategies for almost any number of constraints a company requires, and this is now common practice amongst actuaries. However, even with operations research techniques, it can take minutes to identify optimal asset portfolios, and in some calculations this might be too long. For example, a capital calculation with 500,000 scenarios might require the portfolio to be optimised in each scenario, which could again need hundreds of computers working simultaneously over several days. An even faster approach is emerging, and it is described in more detail in the next section.
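As a simple illustration of the linear-programming formulations described by Kocherlakota et al. (1990) and Brockett & Xiat (1995) (a minimal sketch with invented figures, not the papers’ own models), a cash flow matching problem can be stated as “minimise portfolio cost subject to asset cash flows covering liability outgo in every year”:

```python
import numpy as np
from scipy.optimize import linprog

prices = np.array([98.0, 95.0, 101.0])            # price per unit of each bond
# Cash flow per unit of each bond, by year (rows = years 1..3, columns = bonds).
cash_flows = np.array([[105.0,   4.0,   5.0],
                       [  0.0, 104.0,   5.0],
                       [  0.0,   0.0, 105.0]])
liabilities = np.array([50.0, 60.0, 70.0])        # liability outgo by year

# Minimise cost subject to cash_flows @ x >= liabilities and x >= 0.
result = linprog(c=prices, A_ub=-cash_flows, b_ub=-liabilities, bounds=(0, None))
print(result.x, result.fun)                       # optimal holdings and total cost
```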

4. Quantum Mechanics and Quantum Computing

4.1. Background from Quantum Mechanics

Quantum computers work differently from classical computers and can be used to perform calculations orders of magnitude faster. We give some background to quantum mechanics before describing quantum computers and how they can solve the ALM challenge.

The history of the strange world of quantum mechanics can be traced back to Young’s double-slit experiment demonstrating the wave nature of light in 1801. During the 1920s, the Copenhagen interpretation of quantum mechanics was devised, largely by Bohr and Heisenberg, and remains the most common interpretation. According to this interpretation, microscopic quantum objects do not have definite properties prior to being measured, and quantum mechanics can only predict the probability distribution of a measurement’s possible results, such as its location. The wave function ψ is a mathematical representation of the quantum state of a quantum system and the probabilities for the possible results of measurements can be derived from it. The Schrödinger equation determines how wave functions evolve over time. Upon measurement of the object, the wave function collapses into a single state.

A fundamental principle of quantum mechanics is superposition. This states that if a physical system may be in one of many configurations, then the most general state is a combination of all these possibilities, where the amount in each configuration is specified by a complex number. This is sometimes informally described as an object being in two places at once.

Superposition can be imagined using Figure 1, which is interpreted from Rosenblum & Kuttner (2012). Imagine a quantum object, such as a photon or electron, is randomly sent into one of the two boxes, but the observer does not know into which box it arrived. Each box has a slit that can be opened to release the object and send it towards a screen where its position is measured as a dot on the screen. If the observer opens both slits at the same time, and repeats this experiment many times, then an interference pattern will emerge on the screen as further dots appear. This suggests that the object behaves like a wave. Furthermore, this suggests the object was in both boxes, and is an example of superposition. When the object reaches the screen, the wave function collapses upon measurement and a single dot appears.

Figure 1. Visualising superposition.

Another fundamental principle of quantum mechanics is entanglement. If, for example, two electrons are generated in a way such that their spin property is not independent, so that if one is spin-up then the other must be spin-down, then they are entangled. If a measurement of spin is made on one of the electrons, collapsing its wave function, then the spin of the other electron becomes known instantly, even though they may be separated by a large distance. This apparent faster-than-light communication, referred to as non-locality and as spooky action at a distance, led to the Einstein–Podolsky–Rosen paradox (Einstein et al., 1935), which argued that the quantum wave function does not provide a complete description of reality. Subsequently, a theorem (Bell, 1964) was devised that was later experimentally tested by Freedman & Clauser (1972); the test supported non-locality and ruled out local hidden variable theories.

The properties of superposition and entanglement are fundamental to quantum computing.

4.2. Qubits and Quantum Computing

Quantum computing uses quantum bits, or qubits. These are the basic units of quantum information and can be compared to classical binary bits. Whilst the state of a classical bit can be only either 0 or 1, a qubit can be a superposition of both.

Using the quantum property of entanglement, it is possible to create a linked set of qubits. The power of quantum computing can be imagined by comparing three classical bits with three qubits. There are eight (i.e. ${2^3}$) possible combinations of the classical bits, and a classical computer may perform a calculation based on each combination sequentially. A quantum computer could use all eight states simultaneously due to the properties of superposition and entanglement. If this is extended to a larger number of bits, such as 1,000 bits, with an unimaginably large number of possible combinations, then a classical computer would take an increasingly impractical length of time to complete the calculation.

Qubits are used in quantum annealing computers and universal quantum computers. Quantum annealing is used for optimisation problems. Universal quantum computers are more powerful and have a wider scope of applications but are harder to build. At the current time, the number of qubits is of the order of 5,000 for quantum annealing and of the order of 100 for universal quantum computing. For further information on qubits and quantum computing, see the Appendix.

5. Quantum Computing in Asset–Liability Management

5.1. Introduction

To consider the potential use of quantum computers in ALM, we consider the Solvency II matching adjustment problem for insurers. This is an optimisation problem equivalent to selecting the subset of assets so that the market value of assets is minimised, subject to the regulator’s constraints requiring that the assets’ de-risked cash flows are adequately matched with the insurer’s liability cash flows. This is an example of the ALM problem specified in section 3.1 above.

This can be formulated as a Quadratic Unconstrained Binary Optimization (QUBO) model for use in a quantum annealing computer. A QUBO model takes the following form:

(3) $${\rm{min}}\left( {\mathop \sum \limits_i {l_i}{x_i}\; + \mathop \sum \limits_i \mathop \sum \limits_{j \gt i} {q_{i,j}}{x_i}{x_j}} \right){\rm{\;\;such\;that\;}}{x_i} \in \left\{ {0,1} \right\}$$

The coefficients ${l_i}$ and ${q_{i,j}}$ are constant numbers that define the problem. The binary variables ${x_i}$ and ${x_j}$ are the values to be solved for. This can also be expressed as a problem to minimise ${x^T}Qx$, where $x$ is a vector of the binary variables and $Q$ is a square matrix of constants. For further information, see Furini & Traversi (2019) and Glover et al. (2018).
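For very small problems, the objective in equation (3) can be evaluated by brute force, which is useful for checking a formulation before handing it to a quantum annealer. The following is a minimal sketch (not from the paper; the 2-variable matrix is invented):

```python
import itertools
import numpy as np

# Evaluate x^T Q x over all binary vectors x and return the minimum (only
# feasible for small n; a quantum annealer is used when n is large).
def solve_qubo_exhaustively(Q):
    n = Q.shape[0]
    best = None
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits)
        energy = float(x @ Q @ x)               # the objective of equation (3)
        if best is None or energy < best[0]:
            best = (energy, bits)
    return best

Q = np.array([[-1.0, 2.0],
              [ 0.0, -1.5]])                    # illustrative 2-variable QUBO
print(solve_qubo_exhaustively(Q))               # -> (-1.5, (0, 1))
```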

5.2. Formulating the Problem

We consider the matching adjustment problem with n assets, T years of cash flows, and with the UK’s Prudential Regulation Authority (PRA) Test 1 and Test 3 constraints (PRA SS7/18, 2018).

The constraints are re-expressed as follows:

$\mathop \sum \limits_{i = 1}^n {x_i}A{\left( t \right)_i} - Z{\left( t \right)_1} + s{\left( t \right)_1} = 0{\rm{\;\;}}$ (constraint for time t from PRA Test 1, with slack variable $s{\left( t \right)_1}$)

$\mathop \sum \limits_{i = 1}^n {x_i}A{\left( T \right)_i} - {Z_3} + {s_3} = 0{\rm{\;\;}}$ (constraint from PRA Test 3, with slack variable ${s_3}$)

Here the ${x_i}$ are the binary asset-selection variables, and each slack variable is expressed in binary using additional qubits (see sections 5.3 and 6.1).

The objective function to minimise becomes

(4) $$\mathop \sum \limits_{i = 1}^n {x_i}{V_i} + \mathop \sum \limits_{t = 1}^T \left( {P{{\left( t \right)}_1}{{\left( {\mathop \sum \limits_{i = 1}^n {x_i}A{{\left( t \right)}_i} - Z{{\left( t \right)}_1} + s{{\left( t \right)}_1}} \right)}^2}} \right) + {P_3}{\left( {\mathop \sum \limits_{i = 1}^n {x_i}A{{\left( T \right)}_i} - {Z_3} + {s_3}} \right)^2}$$

where $P{\left( t \right)_1}\;$ and ${P_3}$ are penalty factors to be chosen for the constraints.

The objective function is a QUBO model where the objective is to minimise ${x^T}Qx$ with matrix $Q$ . An example of the objective function and matrix Q is shown in a later section.
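To illustrate how equation (4) becomes a matrix $Q$, the sketch below (illustrative Python, not the paper’s code; all figures are placeholders) expands each squared penalty term into diagonal and off-diagonal QUBO coefficients, using ${x_i}^2 = {x_i}$ for binary variables:

```python
import numpy as np

def add_squared_penalty(Q, coeffs, constant, penalty):
    """Add penalty * (coeffs . x + constant)^2 to Q (upper-triangular form)."""
    n = len(coeffs)
    for i in range(n):
        # Diagonal term: a_i^2 x_i^2 + 2 a_i c x_i, with x_i^2 = x_i for binaries.
        Q[i, i] += penalty * (coeffs[i] ** 2 + 2.0 * coeffs[i] * constant)
        for j in range(i + 1, n):
            Q[i, j] += penalty * 2.0 * coeffs[i] * coeffs[j]
    return Q    # the constant term penalty * constant^2 only shifts all energies

# Placeholder data: 3 assets and 4 binary slack bits for a single Test 3 constraint.
V = np.array([40.0, 35.0, 50.0])                      # asset market values
A_T = np.array([49.0, 51.0, 55.0])                    # asset cash flows to time T
Z3, P3 = 100.0, 10.0                                  # liability total and penalty
coeffs = np.concatenate([A_T, -(2.0 ** np.arange(4))])  # slack bits subtract
Q = np.zeros((len(coeffs), len(coeffs)))
Q[np.arange(3), np.arange(3)] += V                    # linear cost term on assets
add_squared_penalty(Q, coeffs, -Z3, P3)               # Test 3 penalty term
```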

5.3. Number of Qubits

The number of qubits used in the above objective function is $n + \mathop \sum \limits_{t = 1}^T b{\left( t \right)_1} + {b_3}$, where $b{\left( t \right)_1}$ and ${b_3}$ are the numbers of binary slack variables used for the Test 1 and Test 3 constraints respectively. However, if it is deemed practical to use the same number of slack variables for each constraint, and if this is chosen to be ${\log _2}L\left( T \right)$, then this can be simplified to $n + \left( {T + 1} \right){\log _2}L\left( T \right)$, where the logarithm is rounded up.

So, for example, if an insurer had 3,000 assets, 50 years of projection and liabilities of £50bn expressed in units of £1m, then the number of qubits is $3,000 + \left( {50 + 1} \right){\log _2}50,000$ which equals 3,816.

If a fraction of each asset were to be allowed, then this would increase the number of required qubits.
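The qubit count in the example above can be reproduced directly (a trivial sketch; the inputs are those quoted in the text):

```python
import math

n_assets, T, L_T = 3_000, 50, 50_000     # 50 years; liabilities in units of GBP 1m
qubits = n_assets + (T + 1) * math.ceil(math.log2(L_T))
print(qubits)                            # 3816, matching the figure in section 5.3
```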

6. Worked Example

6.1. Formulating the Sample Problem

To illustrate the problem, we consider a simplified example with five assets, 5 years of cash flows and only the PRA Test 3 constraint applying.

The inputs to the problem are shown below and in Table 1.

Table 1. Inputs to the problem

We choose the penalty factor ${P_3}$ to be 10. This magnitude is intended to penalise materially any combination for which the constraint is not close to being met (even allowing for the slack variable ${s_3}$), whilst imposing only a negligible penalty when the constraint is almost fully met.

Items derived from the inputs are shown below and in Table 2.

$${Z_3}= L\left( 5 \right){\rm{\;}} = 208.0$$

${s_3} = - \mathop \sum \limits_{i = 1}^{{b_3}} {2^{\left( {i - 1} \right)}}{x_{5 + i}}$  where ${b_3} = {\log _2}\left( {\mathop \sum \limits_{i = 1}^n A{\left( T \right)_i} - {Z_3}} \right) = 6$, with the logarithm rounded up

Table 2. Discount factors derived from the inputs
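The derivation of ${b_3}$ and the binary encoding of the slack variable can be sketched as follows (illustrative Python; the individual asset cash-flow totals below are placeholders, not the Table 1 values, although ${Z_3}$ matches the text):

```python
import math
import numpy as np

A5 = np.array([49.2, 50.0, 51.0, 52.0, 53.0])   # placeholder A(5)_i totals
Z3 = 208.0                                      # = L(5), as in the example
b3 = math.ceil(math.log2(A5.sum() - Z3))        # number of slack qubits (6 here)

# s_3 is carried by b3 extra qubits x_6 .. x_{5+b3}:
#   s_3 = -(2^0 x_6 + 2^1 x_7 + ... + 2^(b3-1) x_{5+b3})
slack_weights = -(2.0 ** np.arange(b3))
print(b3, slack_weights)
```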

The objective function to minimise becomes

(5) $$\mathop \sum \limits_{i = 1}^5 {x_i}{V_i} + {P_3}{\left( {\mathop \sum \limits_{i = 1}^5 {x_i}A{{\left( 5 \right)}_i} - {Z_3} + {s_3}} \right)^2}$$

6.2. QUBO form

The above optimisation problem is expressed as a QUBO model to minimise ${x^T}Qx$ with matrix $Q$ as shown in Figure 2. The entry in row $i$ and column $j$ of the matrix is the coefficient of ${x_i}{x_j}$ in the objective function. The leading diagonal holds the coefficients of ${x_i}$, since ${x_i} = {x_i}{x_i}$ for binary variables.

Figure 2. Matrix $Q$ .

6.3. Expected Results from Classical Analysis

In this section, we carry out a “classical” analysis, by which we mean looking at all the possible combinations to identify the optimal solution within our specified constraints. There are three combinations that meet the constraint, as shown in Table 3. The optimal combination has the lowest market value of assets and, consequently, the highest matching adjustment and lowest present value of liabilities. This combination holds the first four assets only, i.e. ${x_i} = 1$ for $i \le 4$ and ${x_5} = 0$. It does not involve the slack variables, and the PRA Test 3 constraint is met if $\mathop \sum \limits_{i = 1}^5 {x_i}A{\left( T \right)_i} - {Z_3} \gt 0$.

Table 3. Analysis of possible combinations

6.4. Expected Results from Quantum Computer

The results we would expect from the quantum sampler are shown in Table 4. These were determined classically by considering each of the ${2^{11}}$ (i.e. 2,048) combinations and identifying the three best solutions. In these results, the qubits ${x_i}$ where $i \gt 5$ can be discarded as these are slack variables.

Table 4. The top three expected results from the quantum computer
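The classical check described above is a direct enumeration; a minimal sketch (not the paper’s code) is shown below, where Q would be the 11 × 11 matrix of Figure 2:

```python
import itertools
import numpy as np

# Enumerate all 2^11 bit patterns (5 asset qubits + 6 slack qubits), evaluate
# x^T Q x and keep the lowest-energy solutions.
def lowest_energy_solutions(Q, top=3):
    n = Q.shape[0]
    results = []
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits)
        results.append((float(x @ Q @ x), bits))
    results.sort(key=lambda r: r[0])
    return results[:top]
```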

6.5. Actual Results from Quantum Computer

We ran the problem on a quantum annealing computer. The actual first three results are shown in Table 5. The top row is the result with the lowest energy and is indeed the optimal choice of assets identified in the earlier sections.

Table 5. The actual first three results from the quantum computer
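The paper does not name the hardware or software used. As one possible illustration only, a QUBO of this kind could be submitted to a quantum annealer using D-Wave’s open-source Ocean tools (dimod and dwave-system), roughly as follows:

```python
import dimod
from dwave.system import DWaveSampler, EmbeddingComposite

# Upper-triangular QUBO coefficients as a dict {(i, j): value}; in practice
# these would be the entries of the matrix Q in Figure 2 (tiny placeholder here).
Q_dict = {(0, 0): -1.0, (1, 1): -1.5, (0, 1): 2.0}

bqm = dimod.BinaryQuadraticModel.from_qubo(Q_dict)
# Local classical check: dimod.ExactSolver().sample(bqm)
sampler = EmbeddingComposite(DWaveSampler())      # requires D-Wave Leap API access
sampleset = sampler.sample(bqm, num_reads=1000)   # 1,000 anneal reads
print(sampleset.lowest())                         # lowest-energy samples found
```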

7. Conclusions

In this paper, we considered the potential use of quantum computing in ALM. A quantum algorithm for optimising ALM calculations was presented and tested using a quantum computer. We conclude that quantum annealing can be applied to ALM optimisation problems and has the potential to create investment management efficiencies.

Optimisers give a mathematical optimum, which managers of matching adjustment portfolios can review alongside other qualitative factors they may consider important. The optimal portfolio does not need to be selected, but it is of great value to know where the limits and constraints lie. For example, it is not necessary to hold a portfolio at exactly the highest regulatory allowable constraint, but it is useful to know where that limit is and what matching adjustment can be achieved at different limits.

The discovery of the strange world of quantum mechanics has the potential to lead to new ways of determining an optimal matching adjustment. This in turn may lead to lower capital requirements for shareholders and lower premiums and higher insured retirement incomes for policyholders.

Acknowledgements

We thank the Institute and Faculty of Actuaries (IFoA) for support in the preparation of this paper. We are grateful for the support provided by Dawn McIntosh and Niki Park, both IFoA staff. We appreciate our anonymous peer reviewers for their helpful feedback. We thank Andrew Smith for chairing, and helping us prepare for, the IFoA sessional meeting.

Appendix. Further information on qubits and quantum computing

Further information on qubits

Qubits are the basic units of quantum information and can be compared to classical binary bits. Whilst the state of a classical bit can be only either 0 or 1, a qubit can be a superposition of both.

As described in Rieffel & Polak (1998), a qubit is a unit vector in a two-dimensional complex vector space for which a particular basis, denoted by {|0⟩, |1⟩}, has been fixed. Using bra–ket notation, a qubit can be in a superposition of |0⟩ and |1⟩ such as ${c_0}\left| 0 \right\rangle + {c_1}\left| 1 \right\rangle$, where the coefficients are complex numbers describing how much goes into each state, such that ${\left| {{c_0}} \right|^2} + {\left| {{c_1}} \right|^2} = 1$. If such a superposition is measured with respect to the basis {|0⟩, |1⟩}, the probability that the measured value is |0⟩ is ${\left| {{c_0}} \right|^2}$ and the probability that the measured value is |1⟩ is ${\left| {{c_1}} \right|^2}$.

Even though a qubit can be in infinitely many superposition states, it is only possible to extract a single classical bit’s worth of information from it. This is because the measurement of a qubit results in it being in only one of the two states, i.e. its wave function collapses. Furthermore, quantum states cannot be cloned, so it is not possible to copy the qubit and measure the copy.
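The measurement rule can be simulated classically for a single qubit (a minimal sketch, not from the paper):

```python
import numpy as np

# A qubit in the superposition c0|0> + c1|1>; measurement yields 0 with
# probability |c0|^2 and 1 with probability |c1|^2 (illustrative example).
c0, c1 = 1 / np.sqrt(2), 1j / np.sqrt(2)        # an equal superposition
p0 = abs(c0) ** 2
outcomes = np.random.choice([0, 1], size=10_000, p=[p0, 1 - p0])
print(outcomes.mean())                          # about 0.5
```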

Types of quantum computing

There are three primary types of quantum computing. These are quantum simulation, quantum annealing and universal quantum.

Quantum simulation involves special-purpose devices that use quantum effects to simulate and understand a model of a real physical system. By comparing its results to the real physical system, it can be considered whether the model accurately represents the system. For further details, see Johnson et al. (2014).

Quantum annealing is used for optimisation problems. Both classical simulated annealing and quantum annealing involve determining the lowest energy states of a problem. In classical annealing, a system is put in an equilibrium state at a high temperature and then the temperature is lowered. Using thermal fluctuations, the ground state is reached. In quantum annealing, solutions are found using quantum fluctuations rather than thermal fluctuations, as mentioned in Suzuki (2009). A quantum approach helps the system escape a local minimum and arrive at a global minimum. This is due to superposition and quantum tunnelling, which allows traversing through “solid” barriers, whereas a classical approach must rely on thermal jumps to overcome any energy barriers, as mentioned in Ruiz (2014).

Universal quantum computing, or universal gate quantum computing, is the most powerful and has the widest scope of applications; however, it is the hardest to build. It involves building reliable qubits on which quantum circuit operations, analogous to classical computer operations, can be performed.

Supplementary notes on the formation of the example QUBO matrix

The matrix $Q$ in section 6.2 is derived from equation (5), which can be expanded out so that it takes the form of equation (3). To illustrate this, the coefficient in the top left corner of matrix $Q$, with a value of −180,329, represents the coefficient of ${x_1}$ from equation (5), which is equal to ${x_1}{x_1}$ for qubits. The coefficient can be seen to be ${V_1} + {P_3}\,A{\left( 5 \right)_1}^2 - 2\,{P_3}\,A{\left( 5 \right)_1}{Z_3}$, where ${V_1} = 40$, ${P_3} = 10$, $A{\left( 5 \right)_1} = 49.2$ and ${Z_3} = 208.0$, the latter two being shown to one decimal place. The matrix $Q$ is then ready for use in a quantum annealing computer for optimisation as described above.
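For reference, this arithmetic can be reproduced with the rounded inputs quoted above (a trivial check; the small difference from −180,329 arises because $A{\left( 5 \right)_1}$ and ${Z_3}$ are shown to only one decimal place):

```python
V1, P3, A5_1, Z3 = 40.0, 10.0, 49.2, 208.0
coeff = V1 + P3 * A5_1 ** 2 - 2 * P3 * A5_1 * Z3
print(round(coeff))   # about -180,426 with these rounded inputs; the paper quotes -180,329
```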

Disclaimer

The authors accept no responsibility or liability to any person for loss or damage suffered because of their placing reliance upon any view, claim or representation made in this publication. The information and expressions of opinion contained in this publication are not intended to be a comprehensive study, nor to provide actuarial advice or advice of any nature and should not be treated as a substitute for specific advice concerning individual situations.

References

Bailey, A. (1862). On the Principles on which the Funds of Life Assurance Societies should be Invested. The Assurance Magazine, and Journal of the Institute of Actuaries, 10(3), 142–147.
Bell, J. (1964). On the Einstein Podolsky Rosen Paradox. Physics Physique Fizika, 1, 195–200.
Benjamin, S. (1958). The Application of Elementary Linear Programming to Approximate Valuation. Journal of the Institute of Actuaries (1886–1994), 84, 136.
Brockett, P. & Xiat, X. (1995). Operations Research in Insurance: A Review. Transactions of Society of Actuaries, 47, 24–29.
Chambers, D. & Charnes, A. (1961). Inter-Temporal Analysis and Optimization of Bank Portfolios. Management Science, 7(11), 393–409.
Cohen, K.J. & Hammer, F.S. (1967). Linear Programming and Optimal Bank Asset Management Decision. Journal of Finance, 22, 42–61.
Dyson, F. (2006). A Failure of Intelligence: Part 1. MIT Technology Review, www.technologyreview.com/2006/11/01/227625/a-failure-of-intelligence [accessed December 2020].
Einstein, A., Podolsky, B. & Rosen, N. (1935). Can Quantum-Mechanical Description of Physical Reality be Considered Complete? Physical Review, 47(10), 777–780.
Ferguson, N. (2009). The Ascent of Money – A Financial History of the World. London: Penguin Books, p. 195.
Freedman, S. & Clauser, J. (1972). Experimental test of local hidden-variable theories. Physical Review Letters, 28(14), 938–941.
Furini, F. & Traversi, E. (2019). Theoretical and computational study of several linearisation techniques for binary quadratic problems. Annals of Operations Research, 279, 387–411.
Glover, F., Kochenberger, G. & Du, Y. (2018). Quantum Bridge Analytics I: A Tutorial on Formulating and Using QUBO Models. arXiv:1811.11538.
Hull, J. (1993). Options, Futures, and Other Derivative Securities (2nd ed.). Englewood Cliffs, NJ: Prentice-Hall, 99–101.
Johnson, T., Clark, S. & Jaksch, D. (2014). What is a quantum simulator? EPJ Quantum Technology, 1:10, 14.
Kocherlakota, R., Rosenbloom, E. & Shiu, E. (1990). Cashflow Matching and Linear Programming Duality. Transactions of Society of Actuaries, 42, 281–293.
Macaulay, F. (1938). The Movements of Interest Rates, Bond Yields and Stock Prices in the United States since 1856. New York: National Bureau of Economic Research.
Prudential Regulation Authority (2018). Supervisory Statement 7/18, Solvency II: Matching adjustment. www.bankofengland.co.uk/prudential-regulation/publication/2018/solvency-2-matching-adjustment-ss [accessed December 2020].
Redington, F. (1952). Review of the Principles of Life Office Valuations. Journal of the Institute of Actuaries, 78, 286–340.
Rieffel, E. & Polak, W. (1998). An Introduction to Quantum Computing for Non-Physicists. arXiv:quant-ph/9809016v2.
Rosenblum, B. & Kuttner, F. (2012). Quantum Enigma: Physics Encounters Consciousness (New ed.). London: Duckworth Overlook.
Ruiz, A. (2014). Quantum Annealing. arXiv:1404.2465v1.
Suzuki, S. (2009). A comparison of classical and quantum annealing dynamics. Journal of Physics: Conference Series, 143(1), 012002.
Turnbull, C. (2017). The History of British Actuarial Thought. New York: Springer.