
Big Tech breaking into AML: 4 strategic considerations

What does it mean now that Google has joined the AML AI industry?

Janet Bastiman
June 28, 2023

Until now, the biggest tech players in the world have shown little appetite for financial crime prevention. In fact, banking and payments organisations around the world have been calling for them to be part of the solution given the rate at which organised criminal gangs conduct fraud and scams on their platforms. But with the announcement that Google has been developing an artificial intelligence (AI) platform for financial crime compliance (FCC), what does this mean for the industry?

In all likelihood? Probably not much. While Google and other big tech players do have huge teams and resources dedicated to AI research, they don’t have deep domain expertise in financial crime compliance.  

AI in AML Compliance: Explainability

Anti-Money Laundering (AML) regulations require extremely high levels of explainability and transparency of the systems implemented by financial institutions. And for good reason: regulators and law enforcement need to build a meticulous case if they are to successfully bring down global criminal networks. While financial services regulators have embraced the concept of AI and the operational efficiencies it might bring, they are not yet comfortable with the idea of AI designing AML strategies. As such, there is little specific regulatory provision for the use of AI in AML and financial institutions must tread carefully in how they apply new technologies to their FCC operations.

Read more about the need for and history of explainability in AI

AI Challenges in Financial Services: Expertise

Many large banks, payments, and gaming organisations have data science teams applying science to their business, but more often than not on the customer communications side. Finding data scientists with specific domain knowledge in financial crime compliance is crucial when designing and tuning systems, as well as when responding to regulator requests for audit and explanation. Most financial services organisations need expert partners like Napier for the full lifecycle of an AI system in FCC: when it comes to tuning and training models beyond the initial setup, even a tech giant like Google lacks this expertise.

The success of an AI AML deployment rests on data quality at inception, and historical data may not be appropriate for training models. FCC experts at Napier support clients in understanding their existing data, because poorly trained AI models can be extremely dangerous for business-critical activities. Type III statistical errors, where the system returns the right answer but for the wrong reason, are one of regulators' major concerns in the application of AI to AML. Reducing false positives can be easy, but it can come at the expense of decreasing true positives and can expose bias in the data. If a financial institution does not have its own internal AML AI experts, it should approach projects with caution and partner with a domain specialist that can offer clear advice and guidance.
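The tradeoff above can be made concrete with a small sketch. The alert volumes below are entirely hypothetical, chosen only to illustrate how a blunt reduction in alerts can suppress false positives and true positives together:

```python
# Illustrative only: hypothetical alert volumes, not real Napier or client data.
# Tightening an alerting threshold suppresses alerts indiscriminately, so both
# false positives AND true positives can fall together.

def alert_stats(true_pos: int, false_pos: int) -> dict:
    """Return total alerts and the share that are genuinely suspicious."""
    total = true_pos + false_pos
    return {"alerts": total, "precision": true_pos / total}

before = alert_stats(true_pos=20, false_pos=980)   # baseline system
# A naive filter drops 60% of alerts but also loses a quarter of true positives.
after = alert_stats(true_pos=15, false_pos=385)

print(before)  # {'alerts': 1000, 'precision': 0.02}
print(after)   # {'alerts': 400, 'precision': 0.0375}
```

Precision improves on paper, yet a quarter of the genuinely suspicious activity is now missed: exactly the kind of result a regulator would probe.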

Potential Benefits of AI in AML: Performance

When considering how AI might uplift performance in AML operations, such as False Positive Rate (FPR), organisations need a robust baseline of existing system performance against which to verify any new AI system. For example, Napier’s customers can achieve a 97% reduction in FPR using rules-based strategies, and then an additional 40% uplift when applying AI Advisor to provide suggested additional strategies.
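Because sequential percentage reductions compound rather than add, the combined effect of the two figures above is worth working through. The baseline volume below is hypothetical, and the sketch assumes the 40% uplift applies to the false positives remaining after the rules-based reduction:

```python
# Worked arithmetic for compounding the two reductions quoted above.
# Assumption: the 40% AI uplift applies to the false positives that remain
# after the 97% rules-based reduction (percentages do not simply add).

baseline_fp = 100_000                      # hypothetical annual false positives
after_rules = baseline_fp * (1 - 0.97)     # 97% reduction -> ~3,000 remain
after_ai = after_rules * (1 - 0.40)        # further 40% reduction -> ~1,800 remain

overall_reduction = 1 - after_ai / baseline_fp
print(f"{overall_reduction:.1%}")          # 98.2%
```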

Read how Napier uplifted Financial House’s compliance capabilities

It can be challenging for FCC professionals to cut through the statistical noise when technology companies combine percentages and multipliers in their performance metrics. The statistics quoted by Google indicate there is still a large number of false positives (by its own definition): if two percent of hits result in a Suspicious Activity Report (SAR) filing and overall hits have dropped by sixty percent, then there are still only between four and eight percent ‘true positives’ as measured by SAR submission.
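A hedged reconstruction of that arithmetic, under assumptions the quoted figures do not state (that the 2% SAR rate describes hits before the reduction, and that every SAR-worthy hit survives the 60% drop), lands within the range given above:

```python
# Reconstruction of the arithmetic behind the figures quoted above.
# Assumptions (not stated by Google): the 2% SAR rate describes hits before
# the reduction, and every SAR-worthy hit survives the 60% drop.

sar_rate_before = 0.02      # 2% of hits lead to a SAR filing
hit_retention = 1 - 0.60    # 60% of hits eliminated

sar_rate_after = sar_rate_before / hit_retention
print(f"{sar_rate_after:.0%}")   # 5% -> within the 4-8% range cited
```

On these assumptions, roughly 95% of the remaining hits are still false positives.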

Big Tech in Banking & Payments: Competition

This is not big tech’s first foray into the banking and payments market. Given that the Google AML system requires financial institutions to pass data into the Google environment for processing, I anticipate some hesitancy, both from a data sovereignty and protection perspective and from a competition point of view. Banks and payments players have been reluctant to place their payments data into the cloud environments of tech giants with competing offerings, and rightly so.

Napier strongly recommends against sending data into an API network, even within the same technology provider: the data may leave an isolated network, and Personally Identifiable Information (PII) may well be stored in areas of the cloud environment outside your control, or made available for internal research or to other authorised third parties without your consent or knowledge.

AML systems, like all other business-critical systems in banking and payments, are heavily embedded into highly regulated products and services. Institutions build them into ten-year plans, and invest heavily to train teams around platforms. Given the ‘move fast and break things’ mantra of big tech, and the graveyard of abandoned verticals and products, financial institutions may be hesitant to adopt a platform that may only be flavour of the month.

Weave AI into your AML systems with this 12-step guide on the optimal path to AI implementation in financial crime compliance:

Our eBook guides you through our experts’ recommended process for AI implementation, addresses some of the most common pitfalls and challenges financial institutions face in this journey, and assesses the current regulatory landscape around the use of AI.

Chair of the Royal Statistical Society’s Data Science and AI Section and a member of the FCA’s newly created Synthetic Data group, Janet started coding in 1984 and discovered a passion for technology. She holds degrees in both Molecular Biochemistry and Mathematics and a PhD in Computational Neuroscience. Janet has helped both start-ups and established businesses implement and improve their AI offerings, prior to bringing her expertise to Napier as Head of Analytics. She regularly speaks at conferences on AI topics including explainability, testing, efficiency, and ethics.