
How to implement compliance-first AI for AML

Janet Bastiman
February 4, 2026

The hesitancy around AI adoption for AML is in stark contrast to many other areas of financial services, but it is not unexpected. Financial crime compliance is one of the most highly regulated areas in banking, payments, wealth management, insurance and gambling because of the sheer impact of money laundering on economies and societies. This year, the Napier AI / AML Index 2025-2026 found that global money laundering losses total at least $5.5 trillion. It also found that regulated firms could save as much as $183 billion a year in compliance costs by implementing AI-driven systems, while global economies could recover more than $3.3 trillion annually by reducing illicit flows.

Current state of AI for AML: Micro survey

At a recent event, we asked an audience of financial crime compliance professionals to be honest about the current state of artificial intelligence (AI) for anti-money laundering (AML) and where they thought they might land on the maturity curve in the near future:

Is your organisation currently using or testing AI for AML?

Currently using and well embedded: 2  

Currently using, but early days: 9  

Currently testing, looks promising: 7  

Still planning how to test: 5  

No current plans: 3  

In 12-24 months, where do you think your organisation will be in terms of using or testing AI for AML?

Currently using and well embedded: 7

Currently using, but early days: 12

Currently testing, looks promising: 3  

Still planning how to test: 0

No current plans: 1

 

In order to crack down on financial crime and recover illicit funds, financial institutions need to implement AI for AML that meets requirements for data protection, transparency, fairness, and auditability. In short, they need AI that puts compliance first.

How are regulators collaborating on AI?  

Any implementation of AI needs to be in line with regulatory guidance or mandates, particularly in financial crime compliance environments.

UK

While in some markets guidance is relatively unspecific, in the United Kingdom the Financial Conduct Authority (FCA) takes a very proactive approach. Driven by the understanding that AI in AML needs to be centred on human-in-the-loop decision making, the FCA has several programmes that make use of industry experts to develop this approach:

Synthetic Data Expert Group

The project involves defining best practices and use cases for responsible synthetic data use in financial projects.  

Supercharged Sandbox  

In collaboration with NVIDIA, the sandbox gives firms access to better data, technical expertise and regulatory support to speed up innovation. It is open to any financial services firm looking to innovate.

AI Live Testing

Part of the existing AI Lab, this testing environment is designed to support the safe and responsible deployment of AI by firms and achieve positive outcomes for UK consumers and markets.

UK Financial Conduct Authority (FCA): Public-private partnerships

Napier AI partnered with The Alan Turing Institute, Plenitude Consulting, and the FCA to build a fully synthetic dataset (based on anonymised real transactions and layered typologies) that firms can use to train new financial-crime detection algorithms in a safe environment. The project is part of a greater effort by the FCA to leverage analytics and data science to improve efforts to tackle financial crime, reducing harm and increasing trust in financial services.
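
As a loose illustration of the concept (not the actual FCA / Alan Turing Institute methodology), a synthetic transaction set with a layered typology might be generated along these lines; every name, distribution and threshold below is invented:

```python
# Illustrative sketch only: a toy synthetic-transaction generator with one
# injected "structuring" typology. The real FCA / Alan Turing Institute
# dataset is far richer; all parameters here are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
N_CUSTOMERS, N_TX = 500, 10_000

# Baseline legitimate activity: log-normal amounts, random customers.
tx = pd.DataFrame({
    "customer_id": rng.integers(0, N_CUSTOMERS, N_TX),
    "amount": rng.lognormal(mean=4.0, sigma=1.0, size=N_TX).round(2),
    "is_suspicious": False,
})

# Layer in a structuring typology: a few customers repeatedly send
# amounts just under an assumed 10,000 reporting threshold.
for bad_customer in rng.choice(N_CUSTOMERS, size=5, replace=False):
    burst = pd.DataFrame({
        "customer_id": bad_customer,
        "amount": rng.uniform(9_000, 9_999, size=20).round(2),
        "is_suspicious": True,
    })
    tx = pd.concat([tx, burst], ignore_index=True)

print(tx["is_suspicious"].mean())  # label prevalence for model training
```

Because the labels are known by construction, a dataset like this lets firms train and test detection models without exposing real customer data.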

Globally

A compliance-first approach to AI for AML starts with collaborating with the regulator. In this way, institutions can take advantage of market-wide research, insights, and analysis that accelerate their approach to combating financial crime. Several regulators around the world have initiatives that enable collaborative engagement:

Monetary Authority of Singapore (MAS): Project MindForge

A collaborative initiative to examine the risks and opportunities of generative artificial intelligence (gen AI) technology for financial services.

Bank Negara Malaysia (BNM): Discussion Paper on Artificial Intelligence in the Malaysian Financial Sector

A public consultation to which all interested parties can respond, seeking to flesh out how Malaysia can embrace innovation while maintaining strong governance and consumer protection.

European Banking Authority (EBA): Call for Advice on new AMLA Mandates

The pan-European regulatory authority issued a public consultation paper to collate input from financial institutions on the new Proposed Regulatory Technical Standards, in response to the European Commission’s Call for advice on new AMLA mandates.  

The U.S. Department of the Treasury: Request for Information on Uses, Opportunities, and Risks of Artificial Intelligence in the Financial Services Sector

The regulator sought comments and feedback on the uses of AI in the financial services sector and the opportunities and risks presented by developments and applications of AI within the sector.

Contributing to these kinds of centralised, verified, and shared intelligence initiatives is a pragmatic and effective way to focus your financial crime compliance efforts through an AI lens.

What are the challenges to implementing AI in AML?

Many financial institutions are taking steps to test and iterate the use of AI in their anti-financial crime strategies, aiming to overcome their current challenges.

  1. Internal data restrictions

Internal restrictions may mean that surfacing compliance policies to AI models can be a challenge, inhibiting teams from teaching models their specific risk-based approach. Integrating with internal projects that explore AI applications across the entirety of the organisation can prove the value of any changes needed for financial crime compliance purposes.  
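
As a minimal sketch of what "teaching a model your risk-based approach" can look like in practice, a policy can first be expressed as machine-readable configuration that feeds model features; every field name and threshold below is hypothetical:

```python
# Hypothetical, minimal example of a machine-readable risk policy.
# Field names and thresholds are illustrative, not a real schema.
from dataclasses import dataclass

@dataclass(frozen=True)
class RiskPolicy:
    high_risk_countries: frozenset[str]
    cash_threshold: float          # single-transaction cash limit
    velocity_limit: int            # max transactions per day

POLICY = RiskPolicy(
    high_risk_countries=frozenset({"XX", "YY"}),  # placeholder codes
    cash_threshold=10_000.0,
    velocity_limit=25,
)

def policy_features(tx_country: str, amount: float, daily_count: int) -> dict:
    """Turn one transaction into policy-aligned features a model can learn from."""
    return {
        "in_high_risk_country": tx_country in POLICY.high_risk_countries,
        "over_cash_threshold": amount >= POLICY.cash_threshold,
        "over_velocity_limit": daily_count > POLICY.velocity_limit,
    }
```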

  2. Data availability and appropriate usage

Following decades of large remediations and change projects, data is not always readily available or well understood. Some firms are running data clinics to assess the appropriateness of the data they should be using and to plan how to make it consumable by AML AI models.
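
A "data clinic" pass can be as simple as profiling each candidate field before it goes anywhere near a model. A minimal sketch, where the toy extract and column names are invented stand-ins for a real transaction feed:

```python
# Illustrative data-quality profiling pass; columns are hypothetical.
import pandas as pd

tx = pd.DataFrame({
    "customer_id": [1, 2, 2, None, 4],
    "amount": [120.0, None, None, None, 87.5],
    "channel": ["card", "card", "card", "card", "card"],
})

profile = pd.DataFrame({
    "null_rate": tx.isna().mean(),
    "n_unique": tx.nunique(),
    "dtype": tx.dtypes.astype(str),
})
print(profile.sort_values("null_rate", ascending=False))

# Flag fields too sparse or too constant to be useful model inputs.
unusable = profile[(profile["null_rate"] > 0.5) | (profile["n_unique"] <= 1)]
print("Fields needing remediation:", list(unusable.index))
```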

  3. Policy and processes

Financial crime compliance is built on extensive policy documentation and the translation of that documentation into operational controls. Most financial crime compliance teams have not yet found ways to automate the laborious process of checking policy documents and flowing updates through into front-line controls. The goal is to test AI that can respond to compliance queries by checking legacy materials and quickly sourcing answers for analysts. This may require firms to rewrite policies and procedures so that AI can apply them; the current set often requires human intervention or customer outreach, and rewriting them would allow the information to be obtained from automated sources.
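
As an illustrative sketch of the query-answering idea (not a production design), relevant policy passages can be retrieved for an analyst's question with simple text similarity; the snippets below are invented:

```python
# Minimal sketch of surfacing the most relevant policy passage for an
# analyst query, using TF-IDF similarity. A production system would add
# access controls, document versioning, and a full audit trail.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

policy_passages = [
    "Enhanced due diligence is required for politically exposed persons.",
    "Cash deposits above the reporting threshold must be escalated.",
    "Dormant accounts showing sudden activity require a velocity review.",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(policy_passages)

def answer_source(query: str) -> str:
    """Return the policy passage most similar to the analyst's question."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_matrix)
    return policy_passages[scores.argmax()]

print(answer_source("What do I do about a large cash deposit?"))
```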

  4. Explainability and transparency

Financial crime team members and leaders are themselves accountable to regulators and law enforcement for investigation decisions, necessitating a human-in-the-loop approach to AI when it comes to screening and monitoring. There is reasonable caution around ensuring the explainability and auditability of any AI-generated alerts, or any auto-discounting. Legacy solutions in the market have fallen short on the transparency of their AI, with banks burnt by black-box models that failed to meet regulatory requirements for explainability.
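
One way to keep explanations auditable is to favour inherently interpretable models, where each alert's score decomposes into named feature contributions. A toy sketch, with invented features and investigator outcomes:

```python
# Sketch of per-alert explainability using an inherently interpretable
# model (logistic regression). Features and data are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["over_cash_threshold", "in_high_risk_country", "over_velocity_limit"]
X = np.array([[1, 0, 0], [1, 1, 0], [0, 0, 1], [0, 0, 0], [1, 1, 1], [0, 1, 0]])
y = np.array([0, 1, 0, 0, 1, 0])  # toy historical investigation outcomes

model = LogisticRegression().fit(X, y)

def explain_alert(x: np.ndarray) -> list[tuple[str, float]]:
    """Per-feature contribution to the alert score, for the audit trail."""
    contributions = model.coef_[0] * x
    return sorted(zip(feature_names, contributions), key=lambda c: -abs(c[1]))

print(explain_alert(np.array([1, 1, 0])))
```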

  5. False positive reduction

The aim for many financial crime compliance teams is to use AI for risk management and risk-based decision making, using models to learn the risk appetite of the organisation and to generate alerts or make recommendations on that basis. The risk-based approach can also be applied to false positives, by teaching the model the acceptable ratio (without compromising on true negatives or false negatives that represent real risk). Currently this is challenging for organisations that don't truly understand their false positives or false negatives. Possible ways to reduce false positives without compromising compliance include leveraging AI to configure varying risk appetites, for example for low-risk customers, or to automate quality assurance.
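
A minimal sketch of tuning an alert threshold against an agreed false-positive appetite while watching recall; the scores are synthetic and the 2% target is an assumption for illustration:

```python
# Sketch: sweep an alert threshold to find one within an acceptable
# false-positive rate while monitoring recall (missed real risk).
import numpy as np
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
y_true = rng.binomial(1, 0.05, 5_000)                  # ~5% truly suspicious
scores = np.clip(y_true * 0.4 + rng.normal(0.3, 0.2, 5_000), 0, 1)

TARGET_FPR = 0.02  # acceptable ratio agreed with risk owners (assumed)

for threshold in np.linspace(0.1, 0.9, 9):
    tn, fp, fn, tp = confusion_matrix(y_true, scores >= threshold).ravel()
    fpr, recall = fp / (fp + tn), tp / (tp + fn)
    marker = " <- within appetite" if fpr <= TARGET_FPR else ""
    print(f"threshold={threshold:.1f} FPR={fpr:.3f} recall={recall:.2f}{marker}")
```

The printout makes the trade-off concrete: pushing the threshold up cuts false positives, but past a point it also starts discarding true positives.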

  6. In-house expertise

Some organisations have placed an emphasis on in-house solutions, but developing internal models comes with significant challenges, including quality and effectiveness testing, as well as ensuring compliant and proportional use of customer data. Data science is an important skill set to develop internally, but the development, testing, training and maintenance of AML AI models is labour intensive and requires a specialist blend of data science and fincrime expertise to be compliant.  

  7. Regulatory engagement

Financial institutions often feel behind the regulatory curve when it comes to the adoption of AI, and do not yet take full advantage of innovation programmes offered by regulators, where industry best practice and benchmarking are shared. Firms would benefit from further guidance from regulators on the use of AI, including specifics on parameters, evidential standards, and the depth and quality of explainability requirements. This will become more important with the ongoing move towards agentic AI.

Where to start the journey to AI in AML

Many financial institutions are at the early stages of planning how they will implement AI for AML.

Starting in the transaction monitoring function can be challenging as a first step on the journey. Transaction monitoring is a relatively grey area for regulatory compliance when compared to screening for sanctions compliance: what is deemed compliant or non-compliant for transaction monitoring itself is less clear. Sanctions screening, by contrast, is a more 'black and white' decision: an entity is either sanctioned or it is not.
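
A toy sketch of why screening is the more binary decision: a name either matches the sanctions list, within some fuzzy-match tolerance, or it does not. The list entries and cut-off below are illustrative only:

```python
# Sketch of a binary sanctions-screening decision with fuzzy matching
# to tolerate spelling variants. List and cut-off are invented.
from difflib import SequenceMatcher

SANCTIONS_LIST = ["Ivan Petrov", "Acme Shell Holdings Ltd"]
MATCH_CUTOFF = 0.85  # assumed tolerance for transliteration variants

def is_sanctioned(name: str) -> bool:
    return any(
        SequenceMatcher(None, name.lower(), entry.lower()).ratio() >= MATCH_CUTOFF
        for entry in SANCTIONS_LIST
    )

print(is_sanctioned("Ivan Petrof"))   # True: near-match to a listed name
print(is_sanctioned("Jane Smith"))    # False: no plausible match
```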

Generally, in transaction monitoring financial institutions will receive less feedback from the regulator or law enforcement to feed into the AI, which makes model training more challenging. In the UK this is changing with the FCA projects around synthetic data, typologies, and AI testing.

As a goal for AI in AML, there can be an overfocus on false positive reduction without a good understanding of false negatives, or of why these events need detailed investigation before they can be labelled as such to reduce risk exposure. Understanding why current rules or AI models are generating false positives is crucial to reducing them, without underestimating the risk of false negatives. True and false positives and negatives exist in a balance: it is not a race to zero false positives if this means a drop in true positives as well. AI in AML should aim to reduce the number of repeated false positives that require manual investigation. Automations for the most commonly discounted alerts help release analyst time to investigate genuine red flags. With this additional time and effort, teams should be able to uncover new typologies or identify currently unrecognised risk.
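
As a sketch of what auditable auto-discounting might look like, assuming a hypothetical alert pattern that investigators have historically discounted almost every time:

```python
# Sketch of auto-discounting a repeatedly false-positive alert pattern
# while keeping an auditable record. Rule and fields are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Alert:
    alert_id: str
    pattern: str                      # e.g. an invented "salary_roundtrip"
    audit_log: list[str] = field(default_factory=list)
    status: str = "open"

# Patterns whose alerts were discounted as false positives in, say,
# >99% of past investigations (threshold agreed with compliance).
AUTO_DISCOUNT_PATTERNS = {"salary_roundtrip"}

def triage(alert: Alert) -> Alert:
    if alert.pattern in AUTO_DISCOUNT_PATTERNS:
        alert.status = "auto_discounted"
        alert.audit_log.append(
            f"{datetime.now(timezone.utc).isoformat()} auto-discounted: "
            f"pattern '{alert.pattern}' historically >99% false positive"
        )
    return alert

print(triage(Alert("A-001", "salary_roundtrip")).status)  # auto_discounted
print(triage(Alert("A-002", "rapid_layering")).status)    # open -> analyst
```

The timestamped audit log is the important part: every auto-discounted alert remains fully explainable to a regulator after the fact.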

Meeting financial crime compliance expectations  

Financial institutions face the extremely challenging task of improving AML effectiveness whilst remaining compliant. This can sometimes feel impossible: there is a push for the industry to leverage AI and automations to reduce costs and drive results, but this must be balanced against regulations around audit trails and explainability. The two can feel like opposing forces, when in reality effective AI for AML is built to be compliance first.

Discover the latest insights into AI regulations for AML across 40 different markets. Download the Napier AI / AML Index.  

Chair of the Royal Statistical Society's Data Science and AI Section and a member of the FCA's newly created Synthetic Data Expert Group, Janet started coding in 1984 and discovered a passion for technology. She holds degrees in both Molecular Biochemistry and Mathematics and has a PhD in Computational Neuroscience. Janet helped both start-ups and established businesses implement and improve their AI offerings before bringing her expertise to Napier as Head of Analytics. She regularly speaks at conferences on topics in AI including explainability, testing, efficiency, and ethics.