AI for financial crime compliance – a threat or a solution?

AI can play a dual role in both combating and facilitating financial crime, so how can experts better utilise AI for good?

Janet Bastiman
September 19, 2023

In an era where technology continues to reshape industries, artificial intelligence (AI) stands as a formidable game changer, especially in the realm of financial crime compliance. However, while AI offers remarkable potential in streamlining processes and automating routine tasks, it brings forth challenges that call for a more proactive approach to safeguarding financial ecosystems.

The fine line between AI as a threat and a solution

AI's dual role in both combating and potentially facilitating financial crime poses many challenges. On one hand, AI tools can identify fake documents and impersonation attempts, thwarting criminals. On the other, malevolent actors are using generative AI models to create more realistic fake invoices and records to facilitate money laundering.

Read: How and why does AI go wrong in financial crime compliance?

The regulatory landscape surrounding financial crime compliance is intricate and varied, spanning different jurisdictions and organisational requirements. AI can facilitate the review process by analysing large volumes of data to extract relevant insights quickly. However, it is imperative to ensure that AI systems align with the specific nuances of each organisation's regulatory environment.

While AI can significantly bolster defences against fraud and money laundering, reliance on opaque algorithms can undermine trust. This is why human expertise must remain in command.

The way forward: transparency and explainability

Criminals constantly adapt their methods to launder money, so we cannot rely on static systems alone to detect financial crime. Explainable AI helps us catch and investigate potential criminal activity, and justify flagging it to law enforcement.

To truly trust and understand the alerts generated by AI systems, compliance professionals need visibility into the algorithms' decision-making processes. This visibility extends to understanding why certain alerts are ignored, escalated, or even auto-closed. There is a common misconception that false positive rates should be driven down to zero, but doing so would make it mathematically harder to detect constantly changing illegitimate activity. We must also investigate any activity that is unusual relative to the risk profile and, if there is a case to be made, be able to demonstrate it. By enhancing algorithmic transparency, organisations can bridge the gap between technology and human comprehension.
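
To make this concrete, the sketch below scores a hypothetical transaction with a simple logistic-regression model and lists each feature's additive contribution to the alert, which is the most basic form of the visibility described above. It is a minimal illustration only: the feature names, data, and threshold are invented for the example, not drawn from any production system.

```python
# Minimal sketch: surfacing per-feature contributions for a flagged
# transaction, so an analyst can see why an alert fired.
# Feature names and training data are hypothetical illustrations.
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["amount_zscore", "txns_last_24h", "new_beneficiary", "high_risk_country"]

# Toy training data: rows are transactions, label 1 = confirmed suspicious.
X = np.array([
    [0.1, 2, 0, 0],
    [3.2, 15, 1, 1],
    [0.4, 3, 0, 0],
    [2.8, 12, 1, 0],
    [0.2, 1, 0, 1],
    [3.5, 20, 1, 1],
])
y = np.array([0, 1, 0, 1, 0, 1])

model = LogisticRegression().fit(X, y)

def explain_alert(x):
    """Return the alert score and each feature's additive contribution
    to the log-odds, sorted by impact: the simplest form of explanation."""
    contributions = model.coef_[0] * x
    score = model.predict_proba([x])[0, 1]
    return score, sorted(zip(FEATURES, contributions), key=lambda p: -abs(p[1]))

score, reasons = explain_alert(np.array([3.0, 14, 1, 1]))
print(f"alert score: {score:.2f}")
for name, contribution in reasons:
    print(f"  {name}: {contribution:+.2f}")
```

For a linear model these contributions are exact; for more complex models, techniques such as SHAP play the same role. The principle is the same either way: every alert should ship with its reasons, so an analyst can judge whether to escalate or close it.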

Read: Ethics in AI: Fighting financial crime with a conscience

As AI systems grow in complexity and capability, so does the need to ensure their reliability, fairness, and accountability. This is why it's crucial to refrain from considering testing and explainability as merely initial steps in the process. Instead, they should be seamlessly integrated into the entire production cycle and the model's live deployment.
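
One way to make that integration concrete is to run automated checks against the deployed model on a schedule, not just at sign-off. A common example is a drift check comparing training-time and live score distributions. The sketch below computes a Population Stability Index (PSI) over hypothetical score samples; the 0.2 threshold is a widely used rule of thumb, not a regulatory standard.

```python
# Minimal sketch: a drift check that can run continuously against a
# deployed model, rather than only at initial validation.
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between two score distributions. Values above ~0.2 are
    commonly read as significant drift worth investigating."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_counts, _ = np.histogram(expected, bins=edges)
    a_counts, _ = np.histogram(actual, bins=edges)
    # Convert to proportions, flooring at a small value to avoid log(0).
    e_pct = np.clip(e_counts / len(expected), 1e-6, None)
    a_pct = np.clip(a_counts / len(actual), 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

# Hypothetical score samples: training-time vs. this week's live traffic.
rng = np.random.default_rng(0)
train_scores = rng.beta(2, 8, size=5000)  # mostly low risk
live_scores = rng.beta(3, 6, size=5000)   # behaviour has shifted

psi = population_stability_index(train_scores, live_scores)
if psi > 0.2:
    print(f"PSI={psi:.3f}: score distribution has drifted, review the model")
else:
    print(f"PSI={psi:.3f}: distribution stable")
```

In practice a check like this would run in the monitoring pipeline and raise its findings for model-governance review, alongside back-testing and fairness checks.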

The human element remains critical in adapting and fine-tuning AI models to address unique compliance challenges. A team with varied backgrounds can provide unique perspectives on data analysis, helping to spot potential issues before model construction even begins. Diverse viewpoints can help identify biases, imbalances, and gaps in the data that might otherwise go unnoticed. Cross-functional teams spanning KYC, data, processes, regulations, and systems should work together to minimise such vulnerabilities.
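
Some of that scrutiny can be supported by simple tooling before any model is built. The sketch below is a minimal pre-modelling audit that surfaces class imbalance and missing values per customer segment, the kinds of gaps a diverse team would want on the table; the column names and data are hypothetical.

```python
# Minimal sketch: a pre-modelling data audit that surfaces imbalance
# and gaps a team should discuss before any model is built.
# Column names and values are hypothetical examples.
import pandas as pd

df = pd.DataFrame({
    "segment": ["retail", "retail", "corporate", "corporate", "retail", "pep"],
    "label":   [0, 0, 1, 0, 0, 1],
    "income":  [32_000, None, 250_000, 180_000, 41_000, None],
})

# 1. Class balance per segment: a segment with almost no positive
#    labels cannot be reliably scored.
print(df.groupby("segment")["label"].agg(["count", "mean"]))

# 2. Missingness per segment: gaps concentrated in one group are a
#    bias risk, not just a data-quality nuisance.
print(df.assign(missing_income=df["income"].isna())
        .groupby("segment")["missing_income"].mean())
```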

Listen to Janet speak more about the need for explainability in AML in this podcast

Without a solid foundation of accurate and representative data, even the most advanced algorithms may struggle to produce reliable results. This is especially evident in scenarios where financial behaviours or trends change over time, rendering outdated data irrelevant.  

Financial institutions must invest in AI-driven security measures, continually refine algorithms to adapt to evolving threats, and ensure transparent practices that align with ethical standards. Systems must be designed so that end-users can understand and assess an AI model's validity through back-testing.
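
As a sketch of what such back-testing might look like, the example below replays a scored transaction history month by month and reports detection and false-positive rates per window, so users can see whether performance holds as behaviour shifts. The data, threshold, and window scheme are all invented for illustration.

```python
# Minimal sketch: rolling back-test of an alerting model over monthly
# windows, so validity can be assessed as behaviour changes over time.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 12_000  # hypothetical year of scored transactions with known outcomes
suspicious = rng.random(n) < 0.02
# Suspicious transactions tend to score higher in this toy history.
scores = np.where(suspicious, rng.beta(5, 2, n), rng.beta(2, 5, n))
history = pd.DataFrame({
    "month": rng.integers(1, 13, size=n),
    "score": scores,
    "suspicious": suspicious,
})

THRESHOLD = 0.9  # illustrative alerting threshold

for month, window in history.groupby("month"):
    alerts = window["score"] >= THRESHOLD
    detected = (alerts & window["suspicious"]).sum()
    detection_rate = detected / max(window["suspicious"].sum(), 1)
    fp_rate = (alerts & ~window["suspicious"]).mean()
    print(f"month {month:2d}: detection {detection_rate:.0%}, "
          f"false positive rate {fp_rate:.1%}")
```

A sustained drop in the detection rate or a creeping false-positive rate in later windows is exactly the signal that behaviours have moved on from the training data.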

In the end, AI is neither purely a threat nor a solution in the battle against financial crime—it's a dynamic force that demands responsible stewardship to navigate this delicate balance. By embracing its potential while remaining vigilant against misuse, we can harness AI's power to protect financial systems and ensure a safer, more secure future for the industry and its stakeholders.

AI Regulations – Global Approaches to Combat Money Laundering

Our latest whitepaper, “AI Regulations: Global Approaches to Combat Money Laundering”, gives compliance teams an insight into the regulatory direction of travel and ensures they have the right governance in place to implement AI ethically. Get your copy here

Photo by Michael Dziedzic on Unsplash

Chair of the Royal Statistical Society’s Data Science and AI Section and member of the FCA’s newly created Synthetic Data group, Janet started coding in 1984 and discovered a passion for technology. She holds degrees in both Molecular Biochemistry and Mathematics and has a PhD in Computational Neuroscience. Janet has helped both start-ups and established businesses implement and improve their AI offerings, and now applies that expertise at Napier as Head of Analytics. Janet regularly speaks at conferences on topics in AI including explainability, testing, efficiency, and ethics.