3.1 Introduction
Money laundering involves the transfer of illegally obtained money through legitimate channels so that its original source cannot be traced.Footnote 1 The United Nations estimates that the amount of money laundered each year represents 2–5 per cent of global gross domestic product (GDP); however, due to the surreptitious nature of money laundering, the total could be much higher.Footnote 2 Money launderers conceal the source, possession, or use of funds through a range of methods of varying sophistication, often involving multiple individuals or institutions across several jurisdictions to exploit gaps in the financial economy.Footnote 3
As major facilitators in the global movement of money, banks carry a high level of responsibility for protecting the integrity of the financial system by preventing and obstructing illicit transactions. Many of the financial products and services they offer are specifically associated with money laundering risks. To ensure regulatory compliance in the fight against financial crime, banks must develop intelligence about emerging money-laundering processes and create systems that effectively target suspicious behaviour.Footnote 4
‘Smart’ regulation in the financial industry requires the development and deployment of new strategies and methodologies. Technology can assist regulators, supervisors, and regulated entities by alleviating the existing challenges of anti-money laundering (AML) initiatives. In particular, the use of artificial intelligence (AI) can speed up risk identification and enhance the monitoring of suspicious activity by acquiring, processing, and analysing data rapidly, efficiently, and cost-effectively. It thus has the potential to facilitate improved compliance with domestic AML legal regimes. While the full implications of emerging technologies remain largely unknown, banks would be well advised to evaluate the capabilities, risks, and limitations of AI – as well as the associated ethical considerations.
This chapter will evaluate compliance with the Financial Action Task Force (FATF) global standards for AML,Footnote 5 noting that banks continue to be sanctioned for non-compliance with AML standards. The chapter will then discuss the concept of AI, which can be leveraged by banks to identify, assess, monitor, and manage money laundering risks.Footnote 6 Next, the chapter will examine the deficiencies in the traditional rule-based systems and the FATF’s move to a more risk-oriented approach, which allows banks to concentrate their resources where the risks are particularly high.Footnote 7 Following this, the chapter will consider the potential for AI to enhance the efficiency and effectiveness of AML systems used by banks, as well as the challenges posed by its introduction. Finally, the chapter will offer some concluding thoughts.
3.2 Enforcement and Detection: The Cost of Non-compliance
The FATF sets global standards for AML, with more than 200 jurisdictions committed to implementing its recommendations.Footnote 8 It monitors and assesses how well countries fulfil their commitment through legal, regulatory, and operational measures to combat money laundering (as well as terrorist financing and other related threats).Footnote 9 Pursuant to the FATF recommendations, banks must employ customer due diligence (CDD) measures.Footnote 10 CDD involves the identification and verification of customer identity using reliable, independent documents and data. Banks should conduct CDD for both new and existing business relationships.Footnote 11 They have a duty to monitor transactions and, where there are reasonable grounds to suspect criminal activity, report them to the relevant financial intelligence agency.Footnote 12 Banks must conduct their operations in ways that withstand the scrutiny of customers, shareholders, governments, and regulators. There are considerable consequences for falling short of AML standards.
In a 2021 report, AUSTRAC, Australia’s financial intelligence agency, assessed the nature and extent of the money laundering risk faced by Australia’s major banks as ‘high’. The report highlighted the consequences for customers, the Australian financial system, and the community at large.Footnote 13 It drew attention to impacts on the banking sector – including financial losses, increased compliance costs, lower share prices, and increased risk of legal action from non-compliance – as well as reputational impacts on Australia’s international economic security.Footnote 14
In this climate of heightened regulatory oversight, banks continue to be sanctioned for failing to maintain sufficient AML controls. In 2009, Credit Suisse Group was fined US$536 million for illegally removing material information, such as customer names and bank names, so that wire transfers would pass undetected through the filters at US banks. The violations were conducted on behalf of Credit Suisse customers in Iran, Sudan, and other sanctioned countries, allowing them to move hundreds of millions of dollars through the US financial system.Footnote 15 Also in 2009, Lloyds Banking Group was fined US$350 million after it deliberately falsified customer information in payment records, ‘repairing’ transfers so that they would not be detected by US banks.Footnote 16 In 2012, US authorities fined HSBC US$1.9 billion in a money laundering settlement.Footnote 17 That same year, the ING Bank group was fined US$619 million for allowing money launderers to illegally move billions of dollars through the US banking system.Footnote 18 The Commonwealth Bank of Australia was fined A$700 million in 2018 after it failed to comply with AML monitoring requirements and failed to report suspicious matters worth tens of millions of dollars.Footnote 19 Even after becoming aware of suspected money laundering, the bank failed to meet its CDD obligations while continuing to conduct business with suspicious customers.Footnote 20 In 2019, fifty-eight AML-related fines were issued worldwide, totalling US$8.14 billion – more than double the amount for the previous year.Footnote 21 Westpac Bank recently agreed to pay a A$1.3 billion fine – an Australian record – for violating the Anti-Money Laundering and Counter-Terrorism Financing Act 2006. Westpac had failed to properly report almost 20 million international fund transfers, amounting to over A$11 billion, to AUSTRAC, thereby exposing Australia’s financial system to criminal misuse.Footnote 22 In 2020, Citigroup agreed to pay a US$400 million fine after engaging in what US regulators called ‘unsafe and unsound banking practices’, including with regard to money laundering.Footnote 23 The bank had previously agreed to a US$97.4 million settlement after ‘failing to safeguard its systems from being infiltrated by drug money and other illicit funds’.Footnote 24 The severity of these fines reflects the fact that non-compliance with AML measures in the banking industry is unacceptable to regulators.Footnote 25 More recently, AUSTRAC accepted an enforceable undertaking from National Australia Bank to improve the bank’s systems, controls, and record keeping so that they are compliant with AML laws.Footnote 26
The pressure on banks comes not only from increased regulatory requirements, but also from a marketplace that is increasingly concerned with financial integrity and reputational risks.Footnote 27 A bank’s failure to maintain adequate systems may have consequences for its share price and its customer base. Citigroup, for example, was fined in 2004 for failing to detect and investigate suspicious transactions. The bank admitted to regulators that it had ‘failed to establish a culture that ensured ongoing compliance with laws and regulations’. Within one week of the announcement by regulators, the value of Citigroup shares had declined by 2.75 per cent.Footnote 28
It is, therefore, in the best interests of the banks themselves to manage risks effectively and to ensure full compliance with the domestic legislation that implements the FATF recommendations, including by retaining senior compliance staff.Footnote 29 Despite the high costs involved, banks have largely expressed a strong commitment to improving their risk management systems to protect their own integrity and that of the financial system – as well as to avoid heavy penalties, such as those detailed above.Footnote 30 Yet, while banks continue to invest in their capabilities in this area, they also continue to attract fines. This suggests that the current systems are inadequate for combating financial crime.
The current systems rely on models that are largely speculative and rapidly outdated.Footnote 31 Fraud patterns change constantly to keep up with technological advancements, making it difficult to distinguish between money laundering and legitimate transactions.Footnote 32 But while emerging technologies can be exploited for criminal activity, they also have the potential to thwart it.Footnote 33 AI has proven effective in improving operational efficiency and predictive accuracy in a range of fields, while also reducing operational costs.Footnote 34 Already, some banks have begun using AI to automate data analysis in order to detect suspicious transactions. Indeed, AI could revolutionise the banking industry, including by improving the banking experience in multiple ways.Footnote 35
3.3 Leveraging AI for AML
AI simulates human thought processes through a set of theories and computerised algorithms that execute activities that would normally require human intellect.Footnote 36 It is, in short, the ability of a computer to mimic the capabilities of the human mind. The technology uses predictive analytics through pattern recognition with differing degrees of autonomy. Machine learning is one of the most effective forms of AI for AML purposes.Footnote 37 It can use computational techniques to gain insights from data, recognise patterns, and create algorithms to execute tasks – all without explicit programming.Footnote 38 Standard programming, in contrast, operates by specific rules that are developed to make inferences and produce outcomes based on input data.Footnote 39 Machine learning initiatives allow AML systems to conduct risk assessments with varying levels of independence from human intervention.Footnote 40 Deep learning, for example, is a form of machine learning that builds artificial neural networks through repeated training, allowing it to improve its outcomes continuously and solve complex problems by adapting to environmental changes.Footnote 41 Although there are many machine learning techniques, AI has four main capabilities for AML purposes: anomaly detection, suspicious behaviour monitoring, cognitive capabilities, and robotic process automation.Footnote 42 The effectiveness of these capabilities depends largely on processing power and on the variability and quality of the data, and thus requires some degree of human expertise.
The processes involved in AI can be broadly grouped into supervised and unsupervised techniques. Supervised techniques use algorithms to learn from a training set of labelled data, allowing new data to be classified into different categories. Unsupervised techniques, which do not require labelled training data, use algorithms to separate data into clusters with distinct characteristics. Researchers maintain that algorithmic processes have the potential to detect money laundering by classifying financial transactions at a larger scale than is currently possible – and with greater accuracy and improved cost-efficiency.Footnote 43
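By way of illustration only, the following sketch (written in Python using the open-source scikit-learn library) contrasts the two families of techniques on a small synthetic data set: a supervised classifier is trained on hypothetical labelled alerts and then applied to unseen transactions, while an unsupervised clustering step groups unlabelled transactions so that atypical clusters can be reviewed. The feature names, labels, and figures are invented for the example and do not describe any bank’s production system.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(seed=0)

    # Synthetic features: [amount, transfers per week, share of cross-border payments].
    X = rng.random((1000, 3))
    # Hypothetical labels for the supervised case: 1 = previously confirmed suspicious.
    y = (X[:, 0] > 0.9).astype(int)

    # Supervised technique: learn from labelled historical alerts, then classify new data.
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    classifier = RandomForestClassifier(random_state=0).fit(X_train, y_train)
    print("held-out accuracy:", classifier.score(X_test, y_test))

    # Unsupervised technique: no labels; group transactions into clusters and
    # treat small or atypical clusters as candidates for manual review.
    clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)
    print("transactions per cluster:", np.bincount(clusters))

In practice the two approaches are often combined: supervised models score transactions against known typologies, while unsupervised models surface patterns that have not previously been labelled.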
3.4 The Shift to a Risk-Based Approach
One of the most significant obstacles for banks seeking to meet their compliance obligations is the difficulty of appropriately detecting, analysing, and mitigating money laundering risks – particularly during CDD and when monitoring transactions.Footnote 44 Currently, transaction monitoring and filtering technology is primarily rule-based, meaning that it is relatively simplistic and predominantly focused on automated and predetermined risk factors.Footnote 45 The system operates as a ‘decision tree’, in which identified outliers generate alerts that require investigation by other parties. Thus, when a suspicious activity is flagged, a compliance officer must investigate the alert and, if appropriate, generate a suspicious matter report.Footnote 46
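A rule-based check of this kind can be sketched, purely for illustration, as a short Python function that applies a handful of hypothetical rules and thresholds to a transaction and returns the reasons for any alert. The rules, thresholds, and field names below are assumptions made for the example rather than a description of any vendor’s filtering product.

    # Minimal sketch of a rule-based transaction monitoring check.
    # The rules, thresholds, and field names are hypothetical examples only.
    HIGH_VALUE_THRESHOLD = 10_000          # flag single transfers above this amount
    STRUCTURING_THRESHOLD = 9_000          # repeated just-below-threshold transfers
    HIGH_RISK_JURISDICTIONS = {"XX", "YY"} # placeholder country codes

    def evaluate_rules(transaction, recent_history):
        """Return a list of alert reasons for one transaction; an empty list means no alert."""
        alerts = []
        if transaction["amount"] >= HIGH_VALUE_THRESHOLD:
            alerts.append("high-value transfer")
        if transaction["destination_country"] in HIGH_RISK_JURISDICTIONS:
            alerts.append("payment to high-risk jurisdiction")
        near_threshold = [t for t in recent_history
                          if STRUCTURING_THRESHOLD <= t["amount"] < HIGH_VALUE_THRESHOLD]
        if len(near_threshold) >= 3:
            alerts.append("possible structuring pattern")
        return alerts

    # Any non-empty result would be routed to a compliance officer, who decides
    # whether to file a suspicious matter report.
    example = {"amount": 9_500, "destination_country": "XX"}
    print(evaluate_rules(example, recent_history=[example] * 3))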
In order to minimise the costs and time required to investigate suspicious transactions, it is essential to detect them accurately in the first instance.Footnote 47 In rule-based systems, the task is made all the more difficult by the high false positive rate of the alerts, which is believed to be above 98 per cent.Footnote 48 If risk assessment in low-risk situations is overly strict, unmanageable numbers of false positive identifications can cause significant operational costs.Footnote 49 Conversely, if risk assessments are too lax, illicit transactions can slip through unnoticed.Footnote 50 These static reporting processes struggle to analyse increasingly large volumes of data, making them impractical on the scale required by banks. It has thus become necessary for banks to choose between the efficiency and the effectiveness of their AML processes.
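A back-of-the-envelope calculation, using assumed figures for alert volume and review time alongside the false positive rate cited above, illustrates the scale of this trade-off.

    # Illustrative arithmetic only: the alert volume and review time are assumed figures.
    alerts_per_year = 100_000          # hypothetical annual alert volume for one bank
    false_positive_rate = 0.98         # rate cited in the text
    minutes_per_review = 30            # assumed average manual review time

    true_positives = alerts_per_year * (1 - false_positive_rate)
    review_hours = alerts_per_year * minutes_per_review / 60
    print(f"genuine alerts: {true_positives:.0f} of {alerts_per_year}")
    print(f"analyst hours spent reviewing all alerts: {review_hours:.0f}")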
Moreover, the rule-based systems rely on human-defined criteria and thresholds that are easy for money launderers to understand and circumvent. The changing patterns of fraud make it difficult for rule-based systems and policies to maintain their effectiveness, thus allowing money laundering transactions to be misidentified as genuine.Footnote 51 AML systems are designed to detect unusual transaction patterns, rather than actual criminal behaviour. Rule-based systems thus have the potential to implicate good customers, initiate criminal investigations against them, and thereby damage customer relationships – all without disrupting actual money laundering activities. This is because the systems were designed for a relatively slow-moving fraud environment in which patterns would eventually emerge and be identified and then incorporated into fraud detection systems. Today, criminal organisations are themselves leveraging evolving technologies to intrude into organisational systems and proceed undetected.Footnote 52 For example, AI allows criminals to use online banking and other electronic payment methods to move illicit funds across borders through the production of bots and false identities that evade AML systems.Footnote 53
According to the FATF, implementing a risk-based approach is the ‘cornerstone of an effective AML/CFT system and is essential to properly managing risks’.Footnote 54 Yet many jurisdictions continue to use antiquated rule-based systems, leading to defensive compliance. To keep pace with modern crime and the increasing volume and velocity of data, banks need a faster and more agile approach to the detection of money laundering. They should reconsider their AML strategies and evolve from traditional rule-based systems to more sophisticated risk-based AI solutions. By leveraging AI, banks can take a proactive and preventive approach to fighting financial crime.Footnote 55
3.5 Advantages and Challenges
3.5.1 Advantages
New technologies are key to improving the management of regulatory risks. Banks have begun exploring the use of AI to assist analysts in what has traditionally been a manually intensive task, thereby improving the performance of AML processes.Footnote 56 In 2018, US government agencies issued a joint statement encouraging banks to use innovative methods, including AI, to further efforts to protect the integrity of the financial system against illicit financial activity.Footnote 57 The United Kingdom Financial Conduct Authority has supported a series of public workshops aimed at encouraging banks to experiment with novel technologies to improve the detection of financial crimes.Footnote 58 AUSTRAC has invested in data analysis and advanced analytics to assist in the investigation of suspicious activity.Footnote 59 Indeed, developments in AI offer an opportunity to fundamentally transform the operations of banks, equipping them to combat modern threats to the integrity of the financial system.Footnote 60 And, where AI reaches the same conclusions as traditional analytical models, this can confirm the accuracy of such assessments, ultimately increasing the safeguards available to supervisors.Footnote 61 Although machine learning remains relatively underutilised in the area of AML, it offers the potential to greatly enhance the efficiency and effectiveness of existing systems.Footnote 62
3.5.1.1 Improved Efficiency
Incorporating AI in AML procedures can reduce the occurrence of false positives and increase the identification of true positives. In Singapore, the United Overseas Bank has already piloted machine learning to enhance its AML surveillance by implementing an AML ‘suite’ that includes know-your-customer (KYC), transaction monitoring, name screening, and payment screening processes.Footnote 63 The suite provides an additional layer of scrutiny that leverages machine learning models over traditional rule-based monitoring systems, resulting in real benefits. In relation to transaction monitoring, the recognition of unknown suspicious patterns saw an increase of 5 per cent in true positives and a decrease of 40 per cent in false positives. There was a more than 50 per cent reduction in false positive findings in relation to name screening.Footnote 64
AI has the capability to analyse vast volumes of data, drawing on an increased number of variables. This means that the quality of the analysis is enhanced and the results obtained are more precise.Footnote 65 At the same time, utilising AI in AML can increase productivity by reducing staff work time by 30 per cent.Footnote 66 By combining transactional data with other information, such as customer profile data, it is possible to investigate AML risks within days. In contrast, traditional methods that review isolated accounts often require months of analysis. Additionally, banks can use AI to facilitate the live monitoring of AML standards, which can also improve governance, auditability, and accountability.Footnote 67 Overall, the use of machine learning has been reported to increase operational efficiency by 40 per cent, reinforcing the notion that investment in AI initiatives may have positive implications for the reliability of AML processes.Footnote 68
3.5.1.2 Reduced Compliance Costs
By leveraging AI, banks have an opportunity to reduce costs and prioritise human resources in complex areas of AML.Footnote 69 It has been estimated that incorporating AI in AML compliance procedures could save the global banking industry more than US$1 trillion by 2030Footnote 70 and reduce its costs by 22 per cent over the next twelve years.Footnote 71 The opportunities for cost reduction and improved productivity and risk management offer convincing incentives for banks to engage AI and machine learning to achieve greater profitability.Footnote 72 With increased profits, banks could further improve the accuracy of AML systems and, in the process, advance the goals of AML.Footnote 73
3.5.1.3 Increased Inclusiveness
Digital tools have the potential to increase financial inclusion, promoting more equitable access to the formal financial sector.Footnote 74 Customers with less reliable forms of identification – including First Nations peoples and refugees – can access banking services through solutions such as behavioural analytics, which reduces the burden of verification to one instance of customer onboarding. Utilising AI makes banks less reliant on traditional CDD, offering enhanced monitoring capabilities that can be used to manage verification data.Footnote 75
3.5.2 Challenges
Despite the growing recognition of the potential for AI to improve the accuracy, speed, and cost-effectiveness of AML processes, banks remain slow to adopt these technologies due to the regulatory and operational challenges involved.Footnote 76 Significant hurdles to wider adoption persist and these may continue to stifle innovations in AML compliance.
3.5.2.1 Interpretation
The difficulty of interpreting and explaining the outcomes derived from AI technologies is among the main barriers to securing increased support for these tools.Footnote 77 The Basel Committee on Banking Supervision has stated that, in order to replicate models, organisations should be able to demonstrate developmental evidence of theoretical construction, behavioural characteristics, and key assumptions; the types and use of input data; specified mathematical calculations; and code-writing language and protocols.Footnote 78 Yet artificial neural networks may comprise hundreds of millions of connections, each contributing in some small way to the outcomes produced.Footnote 79 Indeed, as technological models become increasingly complex, the inner workings of the algorithms become more obscure and difficult to decode, creating ‘black boxes’ in decision-making.Footnote 80
In the European Union, the increased volume of data processing led to the adoption of the General Data Protection Regulation (GDPR) in 2016.Footnote 81 The GDPR aims to ensure that the data of individuals is protected – particularly in relation to AML procedures, which often collect highly personal data.Footnote 82 With respect to AI and machine learning, Recital 71 specifies that there is a right to obtain an explanation of the decision reached after algorithmic assessment. Because regulated entities remain responsible for the technical details of AI solutions, fears persist concerning accountability and interpretability where technologies cannot offer robust transparency.Footnote 83 While the GDPR expects that internal compliance teams will understand and defend the algorithms utilised by digital tools, compliance officers working in banks will require significant expertise and resources to do so. It may take a long period of time for even the most technologically literate of supervisors to adjust to new regulatory practices.Footnote 84 Efforts to improve the interpretation of AI and machine learning are vital if banks are to enhance risk management and earn the trust of supervisors, regulators, and the public.
3.5.2.2 Data Quality
The data utilised to train and manage AI systems must be of high quality.Footnote 85 Machine learning models are not self-operating; they require human intervention to ensure their optimal functioning.Footnote 86 In other words, machines cannot think for themselves. Rather, they merely execute and learn from their encoded programming.Footnote 87 Since machine learning is only as good as its input, it is crucial that the models used are based on relevant and diverse data.Footnote 88 Where money-laundering transactions have not previously been identified by the system, it may be difficult for machine learning to detect future instances.Footnote 89 Moreover, false positives would be learned into the system if the training data included them.Footnote 90 Therefore, it is essential that data quality is monitored on an ongoing basis, supported by thorough data analysis and regular data cleansing. This serves to highlight the vital importance of vigilant human collaboration in the technological implementation of AI to ensure that models are well maintained and remain effective.Footnote 91
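A minimal sketch of such ongoing monitoring, using the Python pandas library and a small hypothetical training set, is set out below; the three checks shown (missing values, duplicate records, and label balance) are illustrative assumptions rather than a complete data-governance programme.

    # Illustrative data-quality checks on a hypothetical training set of labelled alerts.
    # Field names and values are assumptions for the sketch, not a prescribed standard.
    import pandas as pd

    training_data = pd.DataFrame({
        "amount": [12_000, None, 8_500, 8_500, 150_000],
        "label":  [1, 0, 0, 0, None],   # 1 = confirmed suspicious, 0 = cleared
    })

    # 1. Missing values: incomplete records degrade model training.
    print("missing values per column:\n", training_data.isna().sum())

    # 2. Duplicates: repeated rows can silently over-weight certain patterns.
    print("duplicate rows:", int(training_data.duplicated().sum()))

    # 3. Label balance: if confirmed cases are vanishingly rare or mislabelled,
    #    the model may learn to ignore them; periodic human review is still needed.
    print("label distribution:\n", training_data["label"].value_counts(dropna=False))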
3.5.2.3 Collaboration
The opacity of AI, especially of machine learning processes, has sparked concerns that are exacerbated by the lack of data harmonisation between actors and users.Footnote 92 Currently, customer privacy rules and information security considerations prevent banks from warning each other about potentially suspicious activity involving their customers. While some customers rely on a single financial services provider for all their banking requirements, criminals often avoid detection by moving illicit proceeds through numerous financial intermediaries.Footnote 93 The FATF has reported that intricate schemes involving complex transaction patterns are difficult and sometimes impossible to detect without information from counterparty banks or other banks providing services to the same customer.Footnote 94 Nevertheless, the FATF’s rules to prevent ‘tipping off’ support the objective of protecting the confidentiality of criminal investigations.Footnote 95
While data standardisation and integrated reporting strategies simplify regulatory reporting processes, they also raise various legal, practical, and competition issues.Footnote 96 It is likely that the modelling capacity of banks will continue to be limited to the financial transactions that they themselves process.Footnote 97 Moreover, where information is unavailable across multiple entities, some technological tools may not be cost-effective.Footnote 98 On the other hand, stronger collaboration may introduce the risk of data being exploited on a large scale.Footnote 99 There is as yet no ‘model template’ in relation to private sector information sharing that complies with AML and data protection and privacy requirements. However, information sharing initiatives are being explored and should be considered in targeted AI policy developments.
3.5.2.4 Privacy
Due to the interconnectedness of banks and third-party service providers, cyber risks are heightened when tools such as AI and machine learning are deployed and stored on cloud platforms. Concentrating digital solutions might exacerbate these risks.Footnote 100 These regulatory challenges reinforce the desire to maintain human-based supervisory processes so that digital tools are not replacements but rather aids in the enhancement of regulatory systems.Footnote 101 Article 22 of the GDPR provides that subjects of data analysis have the right not to be subject to a decision with legal or significant consequences ‘based solely on automated processing’.Footnote 102 The FATF also maintains that the adoption of AI technology in AML procedures requires human collaboration, due to particular concerns that technology is incapable of identifying emerging issues such as regional inequalities.Footnote 103
3.5.2.5 Bias
Although algorithmic decision-making may appear to offer an objective alternative to human subjectivity, many AI algorithms replicate the conscious and unconscious biases of their programmers.Footnote 104 This may lead to unfairly targeting the financial activities of certain individuals or entities, or it may produce risk profiles that deny certain persons access to financial services. For example, AI and machine learning are increasingly being used in relation to KYC models.Footnote 105 Recommendation 10 of the FATF standards requires banks to monitor both new and existing customers to ensure that their transactions are legitimate.Footnote 106 Without the incorporation of AI, existing KYC processes are typically costly and labour-intensive.Footnote 107 Utilising AI can help evaluate the legitimacy of customer documentation and calculate the risks for banks where applications may seem to be fake.Footnote 108 The data input team should ensure that it does not unintentionally encode systemic bias into the models by using attributes such as employment status or net worth.Footnote 109 Transactional monitoring is less vulnerable to such biases, as it does not involve personal data such as gender, race, and religion. Nonetheless, AI and machine learning algorithms could implicitly correlate those indicators based on characteristics such as geographical location.Footnote 110 If not implemented responsibly, AI has the potential to exacerbate the financial exclusion of certain populations for cultural, political, or other reasons.Footnote 111 The use of these digital tools may thus lead to unintended discrimination.Footnote 112 Such concerns are heightened by the fact that the correlations are neither explicit nor transparent.Footnote 113 Therefore, regulators must remain mindful of the need to limit bias, ensure fairness, and maintain controls. The evolving field of discrimination-aware data mining may assist the decision-making processes that flow through information technology to ensure that they are not affected on unjust or illegitimate grounds.Footnote 114 It does this by recognising statistical imbalances in data sets and leveraging background information about discrimination-indexed features to identify ‘bad’ patterns that can then be either flagged or filtered out entirely.Footnote 115
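As a simple illustration of this kind of statistical-imbalance check, the following Python sketch compares alert rates across a hypothetical ‘region’ attribute and flags for human review any group whose rate departs markedly from the overall rate. The data, the attribute, and the tolerance factor are assumptions made for the example and do not represent an established discrimination-aware mining method.

    # Illustrative fairness check: compare alert rates across a protected or proxy
    # attribute (here a hypothetical "region" field) to surface statistical imbalance.
    from collections import defaultdict

    alerts = (
        [{"region": "A", "flagged": False}] * 5 + [{"region": "A", "flagged": True}]
        + [{"region": "B", "flagged": True}] * 2 + [{"region": "B", "flagged": False}]
    )

    totals, flagged = defaultdict(int), defaultdict(int)
    for record in alerts:
        totals[record["region"]] += 1
        flagged[record["region"]] += record["flagged"]

    rates = {region: flagged[region] / totals[region] for region in totals}
    print("alert rate by region:", rates)

    # Flag for human review if one group's alert rate is far above the overall rate;
    # the factor of 1.5 is an arbitrary illustrative tolerance.
    overall = sum(flagged.values()) / sum(totals.values())
    for region, rate in rates.items():
        if rate > 1.5 * overall:
            print(f"review: region {region} is flagged at {rate:.0%} vs overall {overall:.0%}")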
3.5.2.6 Big Data
The term ‘big data’ refers to large, complex, and ever-changing data sets and the technological techniques that are relevant to their analysis.Footnote 116 Policymakers and technical organisations have expressed significant concerns over the potential misuse of data.Footnote 117 There are also apprehensions that the lack of clarity around how data is handled may lead to potential violations of privacy.Footnote 118 In addition, there are uncertainties surrounding the ownership of data, as well as its cross-border flow.Footnote 119 Nonetheless, the primary focus should remain on the use of big data, rather than its collection and storage, as issues pertaining to use have the potential to cause the most egregious harm.Footnote 120
3.5.2.7 Liability
The issues discussed above raise questions of liability regarding who will carry the burden of any systemic faults that result in the loss or corruption of data and related breaches of human rights.Footnote 121 While artificial agents are not human, they are not without responsibility.Footnote 122 Because it is impossible to punish machines, questions of liability are left to be determined between system operators and system providers.Footnote 123 This situation can be likened to a traffic accident in which an employee injures a pedestrian while driving the company truck. While the employer and the employee may both be liable for the injuries, the truck is not.Footnote 124
These issues also raise questions of causation. Will the use of AI and machine learning be considered a novus actus interveniens that breaks the chain of causation and prevents liability from being attributed to other actors?Footnote 125 The answer to this question will largely depend on the characteristics of artificial agents and whether they will be considered as mere tools or as agents in themselves, subject to liability for certain data breaches or losses. Despite the impact of automation processes on decision-making, doubts remain as to whether AI uses ‘mental processes of deliberation’.Footnote 126 Due to the collaborative nature of AI technology and human actors, it is generally assumed that AI is merely an instrument and that accountability will be transferred to banks and developers.Footnote 127 Therefore, where supervisors can be considered legal agents for the operation of artificial technology, they may incur liability on the basis of that agency relationship.Footnote 128 Alternatively, where system developers are negligent as far as security vulnerabilities are concerned, they may be liable for the harm caused by unauthorised users or cyber criminals who exploit these deficiencies.Footnote 129 Thus, supervisors and developers have a duty of care to ensure that they take reasonable steps to prevent harm or damage.Footnote 130
It is possible that, as a result of its continued advancement, machine learning may eventually be granted legal personhood. Rights and obligations would therefore belong to the technology itself, excusing operators and developers from liability.Footnote 131 However, this viewpoint remains highly contested on the basis that AI does not possess ‘free will’, since it is programmed by humans and has little volition of its own.Footnote 132 Banks must not underestimate the importance of these concerns. They should ensure that AI and machine learning are carefully implemented with well-designed governance in place so that risks and liabilities are not unintentionally heightened by the use of new technologies.Footnote 133 Strong checks and balances are required at all stages of the development process.Footnote 134
3.5.2.8 Costs
Banks must consider the costs of maintaining, repairing, and adapting new AI systems.Footnote 135 While AI models have the potential to improve the cost-efficiency of AML compliance, it may be difficult for banks – especially smaller institutions – to budget for high-level AI solutions.Footnote 136 Moreover, there are associated indirect costs that require firms to invest in additional funding – for example, updating existing database systems to make them compatible with new AI solutions and hiring staff with appropriate technical expertise.Footnote 137
3.5.3 Consideration
AI and machine learning have the potential to provide banks with effective tools to improve risk management and compliance with regard to AML. However, if these new technologies are not introduced with care and diligence, they could adversely affect AML systems by introducing greater burdens and risks. Some of the challenges presented by AI are similar to those posed by other technology-based solutions aimed at identifying and preventing money laundering. Machine learning, however, offers a relatively new and unique method of classifying information based on a feedback loop that enables the technology to ‘learn’ through determinations of probability.Footnote 138 Banks can thus analyse and classify information through learned anomaly detection algorithms, a technique that is more effective than traditionally programmed rule-based systems.Footnote 139 At the same time, the utilisation of AI can exacerbate the complexity and severity of the challenges inherent in AML compliance, particularly in relation to interpretation and explanation.Footnote 140 As discussed above, machine learning algorithms usually do not provide a rationale or reasoning for the outcomes they produce, making it difficult for compliance experts to validate the results and deliver clear reports to regulators.Footnote 141 This is particularly concerning for banks, where trust, transparency, and verifiability are of great importance to ensure satisfaction and regulatory confidence.Footnote 142 Nonetheless, in the current regulatory climate, it seems almost inevitable that banks will continue to leverage AI for AML compliance.
3.6 Conclusion
The traditional framework for AML compliance is largely premised on old banking models that do not adequately keep pace with the modern evolution of financial crime. Traditional rule-based monitoring systems are clearly inadequate to detect the increasingly sophisticated methods and technologically advanced strategies employed by criminals. Banks are burdened with false positives while most money laundering transactions remain unidentified, posing a significant threat to the integrity of banks and the financial system itself. Banks that do not meet their compliance obligations expose themselves to significant pecuniary losses and reputational damage.Footnote 143
The FATF has highlighted the potential of innovative technologies such as AI and machine learning to make AML measures faster, cheaper, and more effective than current monitoring processes. While rule-based algorithms remain relevant, harnessing AI and machine learning holds great promise for increasing the accuracy of risk identification and heightening its efficiency due to the large analytical capacity of these processes. While these initiatives may be costly and risky to implement, they offer an excellent return on investment for banks that seek to strengthen their internal AML regime. The implementation of AI is increasingly recognised as the next phase in the fight against financial crime.
Due to the various regulatory and operational challenges that are likely to arise, banks should approach the adoption and implementation of AI with cautious optimism. They should ensure that sophisticated AI and machine learning models can be adequately understood and explained. To achieve optimal outcomes, these technologies should operate in conjunction with human analysis, particularly in areas of high risk. However, banks should be aware that the emphasis on collaboration between analysts, investigators, and compliance officers with regard to AI technology may introduce its own legal and ethical complications relating to privacy, liability, and various unintended consequences, such as customer discrimination.
In the increasingly complex environment of financial crime and AML regulation, banks should thoroughly consider the advantages and challenges presented by AI and machine learning as they move towards transforming risk assessment and leveraging AI to mitigate money laundering risks.