This sessional meeting discussion relates to the paper by the IFoA Towards the Optimal Reserving Process (TORP) Working Party, which was presented at the IFoA sessional event held on Tuesday 30 November 2021.
Mr W. T. Diffey, F.I.A.: Today’s sessional event ‘Your reserves may be best estimate but are they valid?’ is presented by members of the ‘Towards the Optimal Reserving Process’ (TORP) Working Party.
I chair the Working Party, which has been in existence since 2012. It brings together actuaries with a wide range of industry and consulting experience to focus on the reserving process.
The objectives of TORP are firstly, to investigate common practice and determine emerging trends in the reserving process, and secondly, to communicate this information to the wider profession. Currently, TORP has sixteen members across personal and commercial lines, with representation from the UK (including the Lloyd’s market) and international markets.
I would like to introduce the presenters, starting with myself. I am William Diffey, the Chief Actuarial Officer at Assurant for Europe. I also have oversight of reserving for Assurant’s Asia Pacific region.
I am joined by Al Lauder who joined DARAG as Group Chief Actuary in May 2021 after almost ten years at Barbican Syndicate and at Arch Managing Agency.
I am also joined by Ed Harrison. Ed is a Senior Consultant at LCP and advises firms on a wide range of actuarial matters, including reserving, Solvency II and risk management.
We are also joined by Arun Vijay. He is a qualified actuary and manager at Deloitte with experience in reserving, capital and solvency management, and IFRS 17.
Finally, I am joined by Laura Hobern. Laura is an LCP Principal with over 15 years’ experience. She enjoys working with clients and, having worked in-house for reinsurers and insurers, brings experience of reserving, pricing, portfolio analysis and transaction support.
Today’s discussion can be split into two parts. Firstly, Laura (Hobern) and Ed (Harrison) will focus on the current state and techniques of reserving validation and how these can be brought together into an overarching framework. Secondly, Arun (Vijay) and Al (Lauder) will discuss how the validation process will need to be adapted to allow for IFRS 17 and the advancement of machine learning. This includes a case study which looks at the application of machine learning to a subset of London Market data.
I will now hand over to Laura (Hobern) who is going to introduce the framework.
Mrs L. J. Hobern, F.I.A.: The aim of the reserving validation process is to ensure that the results from the reserving process are reasonable, unbiased and include all available information. This brings enhanced governance and increased confidence in the results. In particular, it ensures that there is a thorough understanding of what the estimates represent and any associated uncertainty. Increasingly, automated validation techniques are being used as part of the process to highlight where reserves may be invalid.
During the core reserving analysis, the validation process may identify emerging trends that you might need to allow for, identify the material assumptions and how sensitive the results are to those assumptions, and also highlight areas of concern that require additional analysis.
Outside of the core reserving analysis, the validation process can be used to support the wider business, for example to provide experience analysis to feed into the pricing process, or to support internal queries or external queries from the regulator.
One of the findings of our research was that organisations’ validation processes are well developed but have been built up over time in a stepwise manner. A key recommendation from our paper is to introduce a reserving validation framework that encompasses all of these individual processes.
I will now hand over to Ed (Harrison), who will cover current validation approaches and the suggested framework in more detail.
Mr E. Harrison, F.I.A.: I have picked out six key issues from across our review of existing techniques and I will discuss how to bring these together into an overarching framework.
The advantage of having a framework is that the validation process becomes more efficient. The first two issues are 1) automation and 2) diagnostics and dashboards. These are key to keeping the validation process streamlined. In terms of automation, it is important that the A (Actual) versus E (Expected) process, and the stress and scenario testing process, can be repeated quickly as available information changes, or that processes can be easily adapted to provide new insights.
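To illustrate the kind of repeatable A versus E check being described, here is a minimal Python sketch (not from the paper; the column names, dummy figures and 10% tolerance are purely illustrative assumptions):

```python
import pandas as pd

def actual_vs_expected(df: pd.DataFrame, threshold: float = 0.10) -> pd.DataFrame:
    """Compare actual incurred claims against expected, by reserving class.

    Assumes `df` has hypothetical columns: 'class', 'period',
    'actual_incurred' and 'expected_incurred'.
    """
    summary = (
        df.groupby("class")[["actual_incurred", "expected_incurred"]]
          .sum()
          .assign(ae_ratio=lambda x: x["actual_incurred"] / x["expected_incurred"])
    )
    # Flag classes where actual experience deviates from expected by more than
    # the chosen tolerance, so reviewer time is focused on the exceptions.
    summary["flag"] = (summary["ae_ratio"] - 1).abs() > threshold
    return summary

# Example usage with dummy data
data = pd.DataFrame({
    "class": ["Motor", "Motor", "Property", "Property"],
    "period": ["2021Q3", "2021Q4", "2021Q3", "2021Q4"],
    "actual_incurred": [105, 98, 140, 160],
    "expected_incurred": [100, 100, 120, 125],
})
print(actual_vs_expected(data))
```

The point of structuring the comparison as a single re-runnable function is that it can be refreshed as soon as new information arrives, rather than being rebuilt each reporting cycle.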
In terms of diagnostics and dashboards, a robust reserving validation framework is going to have a lot of output and contain a large amount of data. As actuaries we want to spend our time looking at the central elements of the reserving process, i.e., how claims are developing over time and how this compares to the underlying assumptions we have made. Time should be spent, when setting up the framework, to ensure the output of the validation process is targeted and proportionate. For both these issues, technology advancement brings an opportunity to streamline the validation activity.
The third issue is to ensure a joined-up approach across the business. If we take the A versus E analysis as an example, we note that it is part of reserving validation, capital requirement validation, the Own Risk and Solvency Assessment (ORSA) and the business planning process. So, we would recommend having consistent risk profile distributions, and stress and scenario tests, applied across all of these processes. This will help give the board a joined-up picture of the reserving risk.
The fourth area is projection versus validation granularity. At the moment, the majority of reserve validation is done at the same level of granularity, or in the same cohorts, as the underlying reserving projections. Clearly, it is important to do validation on the same triangles, the same data sets, and the same methods on which you are projecting, but in our view that should only be one part of the validation process. We should also look to carry out validation at more aggregate and more granular levels. Again, this can be facilitated in future by technology advancements.
In terms of the more aggregate levels of granularity, let us consider an example where a class of business has five perils and the claims development within each peril is volatile. As a result of the volatility, a cautious reserve is set for each peril which looks like a sensible thing to do. However, if you then aggregate the reserves to the total class level and look at the implied total loss ratios, you might see that at an overall level the reserves are overly cautious. This is a really powerful example of where performing validation at higher levels of aggregation can really help you police assumptions at a more granular level.
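A small numerical sketch, not taken from the paper, makes the mechanics concrete: if a “cautious” percentile is chosen peril by peril, the aggregated position is considerably more cautious than any individual peril suggests. The lognormal distributions, independence assumption and figures below are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n_sims, n_perils = 100_000, 5

# Hypothetical outstanding-claims distributions for five volatile perils.
means = np.array([26.0, 15.0, 9.5, 7.5, 5.0])
cvs = np.full(n_perils, 0.40)
sigmas = np.sqrt(np.log(1 + cvs ** 2))
mus = np.log(means) - 0.5 * sigmas ** 2
sims = rng.lognormal(mus, sigmas, size=(n_sims, n_perils))

# "Cautious" reserve: the 75th percentile chosen peril by peril.
per_peril_reserve = np.percentile(sims, 75, axis=0)
total_booked = per_peril_reserve.sum()

# Where does that total sit on the distribution of the class as a whole?
class_total = sims.sum(axis=1)
implied_percentile = (class_total < total_booked).mean()

print(round(total_booked, 1))        # sum of per-peril 75th percentiles
print(round(implied_percentile, 3))  # noticeably above 0.75 once aggregated
```

With independent, volatile perils the sum of per-peril 75th percentiles sits well above the 75th percentile of the class total, which is exactly the kind of over-caution that validation at the aggregate level reveals.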
By reserving at a more granular level we will potentially be able to analyse new reserving drivers, as insurance processes capture more data and this data can be analysed in real time. This presents an opportunity to enhance our reserving validation. For example, in the past, if any anomalies were identified the approach would be to take a view on whether the issue is likely to persist in the future, to discuss it with claims teams and underwriters and maybe provide some subjective, qualitative overlay to the reserves.
In the future, we will have the opportunity to see live where our assumptions on homogeneity are perhaps not as we thought, then quickly change the level of granularity that we project at and establish a reserve that takes into account all available information.
Issue five is the timing of validation activity. We are seeing that the reserving process is becoming more compressed on two fronts. Firstly, timescales are being reduced, and secondly more information is being requested. When setting up a validation framework, it is a good opportunity to look at whether aspects of that validation can be done out of cycle, either as a precursor to the reserving process to help you know where to look, or as a post-reserve review to work out where you can improve your processes.
The sixth and final issue is to continue to be aware of the limitations within both the reserving and the validation process. A log of the material limitations should be maintained as part of the organisation’s validation framework and resource plans should consider where any of these limitations could be reduced.
I am now going to hand over to Arun (Vijay) who is going to talk us through some of the challenges that IFRS 17 might bring to the reserving validation process.
Mr A. Vijay, F.I.A.: As you are aware, IFRS 17 is the new accounting standard that will replace IFRS 4 and has the intention of making financial statements more consistent and transparent across the world. However, in practice the standard is quite complex, and it is likely to become more challenging to interpret and validate the results. Firstly, compared to IFRS 4, IFRS 17 requires reporting at more granular levels (individual cohorts, entity level and group reporting), has increased disclosure requirements and imposes tighter working-day deadlines. Secondly, the workflow is more predictable under IFRS 4 with straight-line hand-offs from data teams to actuarial and then on to finance, but under IFRS 17 there are multiple complex hand-offs and dependencies, which will need to be managed in a careful and coordinated manner. In combination, the above points will lead to an increase in the amount of validation that will need to be done out of cycle.
As well as the increase in granularity, we will have more complexity in the data that we need for financial reporting and an increase in the amount of information that needs to be provided in the disclosures. In particular, A versus E will be a feature of the disclosures. When this is combined with the requirement to discount the reserves, it will be difficult to split out the A versus E into a separate ‘discounting impact’ and a separate ‘assumptions impact’. The impact is not linear and any explanation will likely be cumbersome.
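A simple numerical sketch (illustrative cash flows and discount rates only, not from the paper) shows why the attribution is awkward: the one-at-a-time ‘assumptions’ and ‘discounting’ movements do not sum to the combined movement, leaving a cross term to explain:

```python
import numpy as np

def present_value(cashflows: np.ndarray, rate: float) -> float:
    """Discount a vector of annual cash flows at a flat rate (paid at year end)."""
    times = np.arange(1, len(cashflows) + 1)
    return float(np.sum(cashflows / (1 + rate) ** times))

# Hypothetical expected claim payments and a revised (worse) pattern.
expected = np.array([100.0, 80.0, 60.0, 40.0])
actual = np.array([110.0, 90.0, 70.0, 50.0])
rate_old, rate_new = 0.01, 0.03

base = present_value(expected, rate_old)
assumption_only = present_value(actual, rate_old) - base   # cash-flow effect alone
discount_only = present_value(expected, rate_new) - base   # discount effect alone
combined = present_value(actual, rate_new) - base          # both together

# The cross term is what makes a clean "discounting" versus "assumptions"
# attribution in the A versus E disclosure awkward to present.
print(round(assumption_only + discount_only - combined, 2))  # non-zero residual
```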
Additionally, IFRS 17 brings the requirement for a risk adjustment and the methodology for this has to be disclosed. Again, this will bring increased complexity to the reserving process and the associated explanations in the disclosures.
Naturally, we should expect to have to deal with an increased number of questions from investors, auditors, and board members. Auditors will already be engaged and it is certain that their scope of work will be increased (e.g. the capital model could come into scope if it produces values for the risk adjustment). All of this points to an increase in the amount of validation work that will need to be carried out and it will be important to automate as many of the controls as possible.
As I mentioned, we have non-linear effects under IFRS 17, so it is likely that we will see an increase in the use of stress testing, scenario testing, and sensitivity testing of assumptions. Some of these detailed validations should be aided by automation.
These influences are only expected to play out in the long term. In the near to medium term, there are likely to be incremental changes to current validation techniques. As companies continue to invest in their data and analytical capabilities, validation approaches will develop in tandem. This leads us onto the section on machine learning by Al (Lauder).
Mr A. J. R. Lauder, F.I.A.: This section is about how the advancement of machine learning will impact on the reserving process and the underlying data structures. It uses a case study from the London Market involving a database of actuarial risk code estimates. Anyone who has worked in the London Market will have come across the standard benchmarking exercise, where annually you take the risk code data and do as many projections on the risk codes as you can, in order to create benchmarks for setting loss ratios. We took the database of ultimates spanning up to five years and tested this against the output of machine learning (ML) methods applied to the risk code triangles. The ML used metadata from the risk code mapping tables to model the ultimates at the level of business class. The model was trained on data up to 2016, and error terms were then calculated in two different ways. The first was to track errors for both the actuary (i.e. human analysis) and the ML for the fully developed years only. The second was to track errors for both across the whole of the reserving triangle, which has the benefit of including much more data.
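As an illustration of the second, whole-triangle error measure, the following is a hedged Python sketch; the column names and dummy figures are hypothetical and this is not the Working Party’s actual code:

```python
import pandas as pd

def track_errors(estimates: pd.DataFrame) -> pd.DataFrame:
    """Mean absolute percentage error by development period.

    `estimates` is assumed to hold one row per (origin year, development
    period) cell of the triangle, with hypothetical columns 'dev_period',
    'actuary_ult', 'ml_ult' and 'actual_ult' (the eventual outcome).
    """
    err = estimates.assign(
        actuary_err=lambda d: (d["actuary_ult"] / d["actual_ult"] - 1).abs(),
        ml_err=lambda d: (d["ml_ult"] / d["actual_ult"] - 1).abs(),
    )
    # Averaging by development period shows where each estimator does better,
    # e.g. the ML at early development periods, parity at later ones.
    return err.groupby("dev_period")[["actuary_err", "ml_err"]].mean()

# Example usage with dummy figures
demo = pd.DataFrame({
    "dev_period": [1, 1, 2, 2],
    "actuary_ult": [120.0, 95.0, 108.0, 101.0],
    "ml_ult": [112.0, 102.0, 107.0, 99.0],
    "actual_ult": [110.0, 100.0, 105.0, 100.0],
})
print(track_errors(demo))
```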
In conclusion, we determined that the ML was as good as the actuary in later development periods, but better than the actuary in the early development periods, and this held for both ways of measuring error. We concluded that the ML performed better in early development periods because it was looking across all triangles and so had more of a total market view and more data to smooth across, whereas the human will be very focused on the triangle (e.g. motor theft) they are projecting, which will have very limited data in the early development periods.
Interestingly, increasing the weight that the ML model placed on total market data led to worse performance compared to the actuary in later development periods.
This exercise does not use the full power of machine learning. It was only applied to aggregate data, whereas machine learning can be applied to much more granular data and bigger data sets. Given the increase in granularity of available data under IFRS 17, the demand for machine learning is only going to increase over the next 10 years. To realise the benefits, data quality and machine learning methodologies will also need to improve.
Another implication is that you will need these skillsets within your team. Even if you are using external providers, you need internal understanding of how to calibrate the models, how they work and to be able to explain the results to the board and to the regulator.
The second part of this section is around the applications of ML and Artificial Intelligence (AI) within the context of the role of reserving actuaries. Firstly, the actuary will need to become familiar with handling unstructured data, e.g. tools that can read and transform claims files or other text files, such as letters or slips from brokers, into a dataset that can be analysed. One such technique is optical character recognition (OCR), which can, for example, lift certain fields out of a letter. The efficiency benefits of this technology are huge and ultimately the actuary will be able to make use of the increased amount of data.
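As a flavour of what such a tool can look like, here is a minimal sketch assuming the open-source pytesseract wrapper around the Tesseract OCR engine; the file name, field label and regular expression are illustrative assumptions rather than a production pipeline:

```python
import re

import pytesseract
from PIL import Image

# Read the raw text out of a scanned broker letter (hypothetical file name).
raw_text = pytesseract.image_to_string(Image.open("broker_letter.png"))

# Lift a single illustrative field ("Sum insured: £1,234,567") out of the text
# as a number that can be appended to an analysable dataset.
match = re.search(r"Sum insured:\s*£?([\d,]+)", raw_text)
sum_insured = float(match.group(1).replace(",", "")) if match else None
print(sum_insured)
```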
The implications of the increase in data available for analysis are huge. If we combine the ability to automatically reserve at a lower level of segmentation with the ability to segment the portfolio in lots of different ways, based on consistently coded and accurate data, we may well find that the reserving actuary plays a more fundamental role in the performance management of the portfolio.
Another very rich source of data is the claims files. These can be read into massive databases, unlocking vast treasure troves of data which could bear fruit in the future in terms of predictive modelling. Going forward, actuaries with IT skills will play a bigger part in the key business decisions of organisations. In summary, the applications of data science and AI are very broad and deep, and actuaries are perfectly placed within insurance businesses to lead the transformation of how these technologies are implemented and to move businesses to a state where the quality of the data they hold is much higher than it is today.
Mr Diffey: In our view, machine learning is a powerful tool, but it is hard to apply without improvements in data quality and governance. To conclude, we now have more sophisticated and powerful tools in data analysis and machine learning to help meet the challenges ahead. In particular, machine learning could bring a step change in the analysis of data and trends compared with the current reliance on human processing of available data. If these tools are applied well, they should help actuaries meet the upcoming challenges. Knowing how and when to use the tools will be a challenge, and that is why we think it is important that a structured validation framework is embedded within the reserving process. Without this structure, our concern is that the output of the validation process will become fragmented and difficult to interpret and understand. This will lead to boards, regulators and practitioners alike losing confidence in the reported reserves, regardless of whether they are then proven right by subsequent development. That leads us to our conclusion, which is, in a way, the title of the paper: “Your reserves may well be right, but they may not be valid, at least as best estimate”.
At this point I’ll take questions.
Are the machine learning techniques applicable outside of general insurance?
Mr Lauder: Yes, in fact machine learning may work better outside general insurance, as the data sets tend to be of better quality. Of course the reserving techniques and validation focus may be different but I think this reinforces the point that the validation needs to be embedded within the specifics of each organisation’s business.
You referred to non-linear impacts under IFRS 17. Can you give some examples?
Mr Vijay: I will give you an example of a recent impact on bodily injury claims. As well as a change in the level of underlying claims due to Covid, we had delays in claims being settled due to disruption of the legal process. Because IFRS 17 operates on a cash-flow basis (unlike IFRS 4), the impacts did not accumulate in a simple additive manner. This was further compounded by the requirement to discount under IFRS 17.
You mentioned the need to use consistent Stress and Scenario tests for reserve modelling and for ORSA. Could you give any examples of what these may be?
Mr Harrison: If you look at the teams in isolation, you have a reserving team who cluster their tests around the best estimate level. You have a capital team who look at tests at a more extreme level, with a particular focus on the 1-in-200 level. Then you have the risk team, who, as part of the ORSA, look at a much broader range of tests. Across the business the tests often come from different models. It is only when the results get to board level, often through different routes, that the inconsistencies become apparent. My suggestion is that these inconsistencies are addressed in the validation framework so a more joined-up approach across the business can be achieved.
Do you think it is possible for reserves to be valid, but wrong?
Mrs Hobern: It depends exactly what we mean by wrong, but assuming we mean that the reserves that we set now are not ultimately the same number as the claims that are paid, then I think it is certain that they are going to be wrong. The message behind the question ‘are they valid?’ is firstly that we should aim to incorporate all available information and secondly that our assumptions, analysis and conclusions are challenged to ensure that confidence in the results is high and that any limitations or uncertainty is well explained.
Mr Harrison: I would like to add a concrete example to that. When determining the unearned premium reserve for a motor portfolio in December 2019 we were unaware of the upcoming Covid pandemic. Our best estimates for motor unearned reserves in December 2019 were valid in that they took into account all of the information that was there. These reserves would typically have been justifiable to the board and to the regulator, but they would have been undoubtedly wrong due to the Covid impact.
Are validation frameworks only applicable for large companies? How can they be applied within small to medium sized companies?
Mr Harrison: I think it is the opposite, and that if you have a large team then you could get away with having a non-structured validation process because you have got a lot of pairs of eyes looking at a lot of different trends, and hopefully coming to sensible conclusions. If you have a small team with a large number of pots of reserves, having a really structured validation process that you set up out of cycle, where one or two people can manage the diagnostics and outputs, can be really powerful. This is because it means that you are not stretched looking at multiple reserving triangles and reserve values calculated under multiple methodologies during the period of the reserving cycle. However, it is worth adding that you may need some initial resource from elsewhere within the business (e.g. the claims team) to get the process off the ground and embedded within the business.
Mrs Hobern: A validation framework is appropriate for all companies, but it won’t necessarily look the same for each one. It is about designing the process that works best for your company, based on the characteristics of the organisation.
Will IFRS 17 make machine learning more or less applicable?
Mr Vijay: I do not think machine learning is the answer to all the IFRS 17 challenges. Given the amount of available data, and the detailed level of granularity at which assumptions will be set under IFRS 17, I would expect to see an increase in the use of machine learning. However, I expect that a corresponding increase in the quality of data and technology will also be required. This is why many organisations are combining IFRS 17 with wider transformation and data analytics programmes.
How do we ensure that validation does not become a tick-box exercise?
Mrs Hobern: To avoid this happening the validation process and governance should be an integral part of the running of the business, in particular it should focus on the material assumptions within the reserving process.