We held a webinar in collaboration with the Asian Institute of Chartered Bankers (AICB): Follow the Money – Understanding the Gaps and Applications of Artificial Intelligence in AML.
To achieve a better understanding of how artificial intelligence (AI) can detect the flow of dirty money, the session welcomed the following panel to share their expertise and experiences:
- Rana Datta, Managing Director, Risk and Compliance Lead SEA & RegTech Practice Lead APAC, Protiviti
- Chu Mansfield, Former Senior Compliance Officer, Global Markets APAC, BNP Paribas and Former Senior Anti-Bribery & Corruption Advisor, Deutsche Bank
- Dr Janet Bastiman, Chief Data Scientist, Napier
- Robin Lee, Head of APAC, Napier (Moderator)
Here’s a summary of the key points made by the panel:
Criminals are already using AI
Rapid digitalisation has changed the world dramatically over the last two years. Everyone is going digital, including criminals, who are using AI and machine learning algorithms to commit financial crimes in novel ways.
With financial crime rising, financial institutions need to innovate continually to get ahead of criminals. Traditional rules-based systems simply can’t recognise new money laundering patterns on their own. As Rana put it, all too often the horse has already bolted.
Janet added that sophisticated criminals are highly successful at making their money look legitimate. The best way to address this is through AI, which offers multiple techniques for detecting suspicious activity.
AI offers a range of techniques to detect behaviour that isn’t genuine
As money launderers constantly refine their tools and methodologies, AI’s exceptional ability to recognise behaviour that deviates from the norm becomes all the more essential.
Whether you are looking for established typologies or new unknown ones, AI offers different models to surface clues and deeper insights.
AI applications for AML include detecting:
- Known money laundering typologies – these are the well-defined typologies. Here, rules can be applied for simple cases and supervised learning can be used for complex cases. Rules are not able to detect all suspicious behaviours, but AI’s fuzzy matching ability is able to detect events even when there’s not an exact match.
- Partially known money laundering typologies – these are typologies whose money laundering indicators are not fully defined, or may still be evolving. Since they cannot be captured by rules, semi-supervised learning techniques can be used to detect them. When looking at partial knowns, a feedback loop with analysts is important for analysing features and investigating results, to determine whether they represent truly criminal typologies or just changing behaviour. That analyst feedback facilitates model training.
- Unknown money laundering typologies – these are new typologies that are yet to be defined and sit in the negative space where analysts are not currently looking. They need to be detected through sophisticated AI techniques complemented by experienced analysts who can review the data. A supervised learning approach is not possible here because there is no definition of what you’re looking for, and rules cannot be written for the same reason; instead, unsupervised AI looks for unusual activity that wouldn’t otherwise be detected.
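To make the fuzzy-matching idea concrete, here is a minimal sketch in Python using the standard library’s difflib. The watchlist names and the 0.8 threshold are hypothetical, and production screening engines use far richer matching algorithms:

```python
from difflib import SequenceMatcher

# Hypothetical watchlist entries -- illustrative only.
WATCHLIST = ["Acme Trading Ltd", "Global Remit Services", "Oceanic Holdings"]

def fuzzy_match(name, watchlist, threshold=0.8):
    """Return watchlist entries whose similarity to `name` meets the threshold."""
    hits = []
    for entry in watchlist:
        score = SequenceMatcher(None, name.lower(), entry.lower()).ratio()
        if score >= threshold:
            hits.append((entry, round(score, 2)))
    return hits

# A misspelled counterparty name still matches,
# where an exact-match rule would miss it entirely:
print(fuzzy_match("Acme Tradng Ltd", WATCHLIST))  # → [('Acme Trading Ltd', 0.97)]
```

This is the sense in which fuzzy matching catches events even when there’s no exact match: the comparison yields a similarity score rather than a yes/no answer.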
It’s important to remember that single anomalies are not necessarily indicative of suspicious behaviour; the full facts and circumstances need to be considered.
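As a toy illustration of unsupervised detection of unusual activity, the sketch below flags transaction amounts that deviate sharply from an account’s historical norm. The amounts and the z-score threshold are invented for illustration, and, as noted above, a flagged amount is only a candidate for analyst review, not proof of suspicion:

```python
from statistics import mean, stdev

# Hypothetical transaction amounts for a single account -- illustrative only.
history = [120, 95, 130, 110, 105, 98, 125, 115, 102, 5000]

def flag_anomalies(amounts, z_threshold=2.5):
    """Flag amounts more than `z_threshold` standard deviations from the mean.

    This is the simplest possible stand-in for the unsupervised models the
    panel describes; it surfaces outliers without any predefined typology.
    """
    mu, sigma = mean(amounts), stdev(amounts)
    return [a for a in amounts if abs(a - mu) / sigma > z_threshold]

# Only the 5000 payment deviates enough from this account's normal
# behaviour to be surfaced for review.
print(flag_anomalies(history))  # → [5000]
```

Real systems would model many features jointly (counterparties, timing, jurisdictions) rather than a single amount series, but the principle is the same: deviation from learned normal behaviour, not a hand-written rule, triggers the alert.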
Explainable AI is required to plug the gaps between AI systems and regulatory and user expectations
In recent years, regulators and users alike have come to recognise how important explainable AI is – both for system usability and for AI trust and acceptance.
Regulators say explainable AI is a way for end users to understand AI systems so they can be used in business-as-usual activities. Lipton (2018) defines four characteristics that make AI explainable: transparency, justification, informativeness, and uncertainty estimation.
It should not be necessary to employ huge teams of data scientists just to understand AI. Napier believes that unless a non-technical end user can understand why the AI gave the result it did, it is not explainable AI.
This is important because AI models are not, and never will be, 100% accurate. We therefore need to examine individual cases to understand why a particular result was given.
Janet explained three techniques for getting explanations from AI systems: post-hoc explanations, instance-based explanations, and interpretable AI.
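As a hedged illustration of the third technique, interpretable AI, consider a model simple enough to be its own explanation: a linear risk score whose per-feature contributions can be shown directly to an analyst. The features and weights below are hypothetical, not taken from the webinar:

```python
# Hypothetical feature weights for a linear transaction risk score.
# In an interpretable model like this, the weights and contributions
# ARE the explanation -- no separate explainer is needed.
WEIGHTS = {
    "txn_amount_vs_baseline": 0.5,   # deviation of amount from normal
    "high_risk_jurisdiction": 2.0,   # counterparty in a high-risk country
    "rapid_in_out_flow": 1.5,        # funds leave almost as soon as they arrive
}

def risk_score(features):
    """Return (total score, per-feature contributions) for a transaction."""
    contributions = {name: WEIGHTS[name] * features.get(name, 0.0)
                     for name in WEIGHTS}
    return sum(contributions.values()), contributions

score, why = risk_score({"txn_amount_vs_baseline": 3.0,
                         "high_risk_jurisdiction": 1.0,
                         "rapid_in_out_flow": 0.0})
```

Because each contribution is visible, a non-technical analyst can see exactly which factors drove the score, which speaks to the transparency and informativeness characteristics mentioned above.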
Ultimately, explainable AI should be able to give analysts all the information they need directly through the result and its explanation, and indirectly through test results, highlighting the data that gave the result and allowing for further investigation.
Worldwide legislation is likely to change, with regulation concerning explainability expected to become stricter and more detailed. This is needed to ensure human oversight of critical systems.
Data access and management is essential for AI
It’s important to have a very clear data strategy before you start with AI. Without a data strategy, it won’t be possible to match well-defined typologies. And when looking at unknown typologies, without the correct data the results may still not give the insights that are needed.
AI has no context beyond the data it has been given, so it can only interpret that data. This means that when considering accuracy and relevance, it’s important to consider what’s not in your model as much as what is.
Despite the importance of data to AI, the panel agreed that access to data is an issue. All too often, data is sitting in disparate systems and many companies are battling with data silos. On an organisational level, cleansing and understanding data is key. On a global level, we need to think about how we can share data worldwide; criminals rely on the fact that companies can’t follow their trail because their data isn’t shared.
AI offers huge potential for improving the effectiveness of AML efforts. It equips analysts to detect all types of money laundering typologies, ranging from those that are well-known to those that are currently unknown.
Yet the adoption of AI still has a long way to go. For most banks and financial institutions, the journey has only just begun – or hasn’t even started. While many boards and risk committees are still not fully comfortable with AI, we need to get to the point where AI is making level 1 decisions.
Fortunately, the increasing explainability of AI systems will help build user trust in using AI. This, alongside the regulatory support we see for using AI in AML, will ultimately encourage and support adoption.
Discover how AI is implemented in our next-generation financial crime compliance solution