How generative AI and large language models could revolutionize financial crime compliance

David Choi from Oliver Wyman shares some use cases in which generative AI can be used for financial crime compliance.

David Choi
January 17, 2024

Since the introduction of ChatGPT in 2022, there has been significant focus on large language models (LLMs) and generative AI solutions. Compliance functions are actively exploring how these emerging technologies can optimize their operations and technology. Financial crime compliance teams face the challenge of sifting through vast amounts of structured and unstructured data to identify higher-risk customers and detect suspicious activity.

Financial crime compliance teams are often forced to be reactive rather than proactive in managing risks and issues. However, generative AI and LLMs are unlocking transformative use cases (such as scanning and analyzing external events, drafting investigation reports, and performing thematic or root cause analysis) that could advance and revolutionize the way financial crime compliance functions operate.

Generative AI and LLMs should augment, not replace

As mentioned in my recent report, ‘ChatGPT and the Compliance Function’, AI-enabled solutions cannot think like humans, and final decisions or reviews must be made by qualified financial crime professionals. Rather than replacing humans, look for use cases where LLMs can accelerate processes and free risk professionals to focus on higher-value analysis instead of mundane tasks such as data collection. LLMs can also gather and catalog information, such as sanctions violations, to identify themes or root causes that would take a human many hours or days to compile manually. In all cases, human expertise and oversight are a must: financial crime professionals must make the final decision and perform quality assurance on the outputs.
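To make that oversight concrete, here is a minimal human-in-the-loop sketch in Python: the LLM only produces drafts, and nothing leaves review until a named analyst approves it. The OpenAI SDK, the model name, and the DraftFinding structure are illustrative assumptions, not a prescribed stack.

```python
from dataclasses import dataclass

from openai import OpenAI  # pip install openai; assumes OPENAI_API_KEY is set

client = OpenAI()


@dataclass
class DraftFinding:
    case_id: str
    draft_text: str
    status: str = "PENDING_HUMAN_REVIEW"  # drafts are never auto-finalized
    reviewer_notes: str = ""


def draft_case_summary(case_id: str, raw_notes: str) -> DraftFinding:
    """Ask the LLM for a first-draft summary; the output is a draft, not a decision."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        temperature=0,
        messages=[
            {
                "role": "system",
                "content": (
                    "Draft a concise financial crime case summary. "
                    "Do not make a risk decision; list open questions instead."
                ),
            },
            {"role": "user", "content": raw_notes},
        ],
    )
    return DraftFinding(case_id=case_id, draft_text=resp.choices[0].message.content)


def approve(finding: DraftFinding, analyst: str, notes: str) -> DraftFinding:
    """Only this explicit human step moves a draft out of review."""
    finding.status = f"APPROVED_BY_{analyst}"
    finding.reviewer_notes = notes
    return finding
```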

Common use cases for generative AI and LLMs in financial crime compliance

1. Monitoring external events and risk assessment

Financial institutions are tapping into the power of LLMs to stay on top of regulation changes and external events that could impact their business. They can also use LLMs to scan through internal policies, controls and incidents to perform an initial risk assessment and identify potential changes required.
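As a hedged illustration of this kind of horizon scanning, the sketch below asks an LLM to summarize an external update and flag which internal policies might be affected. The policy inventory, model name, and prompt wording are assumptions for illustration only.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

INTERNAL_POLICIES = [  # hypothetical policy inventory
    "P-101 Customer Due Diligence",
    "P-204 Sanctions Screening",
    "P-317 Transaction Monitoring Thresholds",
]


def assess_regulatory_update(update_text: str) -> str:
    """Summarize an external update and flag potentially impacted policies."""
    prompt = (
        "You are assisting a financial crime compliance team.\n"
        "1. Summarize the regulatory update below in three bullet points.\n"
        "2. For each internal policy, state whether it is likely impacted and why.\n\n"
        "Internal policies:\n- " + "\n- ".join(INTERNAL_POLICIES) + "\n\n"
        "Regulatory update:\n" + update_text
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        temperature=0,
        messages=[{"role": "user", "content": prompt}],
    )
    # The assessment is a starting point for a human reviewer, not a conclusion.
    return resp.choices[0].message.content
```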

2. Screening and adverse media

Effective client screening is essential for identifying potential risks during customer onboarding and periodic reviews. LLMs bring a whole new level of sophistication to this process. By providing context and supporting information for name matches, or by summarizing adverse media results, LLMs can improve the accuracy and efficiency of name screening and adverse media searches.
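A minimal sketch of what such screening triage could look like, assuming the OpenAI SDK and a made-up customer record schema: the model compares a screening hit against an adverse media snippet and returns a structured same-entity assessment for the case system. The final disposition stays with the analyst.

```python
import json

from openai import OpenAI

client = OpenAI()


def triage_screening_hit(customer: dict, article_text: str) -> dict:
    """Ask for a same-entity assessment and a summary, returned as JSON."""
    prompt = (
        "Compare the customer record with the adverse media article. "
        "Return JSON with keys: same_entity (yes/no/unclear), rationale, "
        "article_summary.\n\n"
        f"Customer record: {json.dumps(customer)}\n\n"
        f"Article:\n{article_text}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        temperature=0,
        response_format={"type": "json_object"},  # JSON mode
        messages=[{"role": "user", "content": prompt}],
    )
    return json.loads(resp.choices[0].message.content)


# Entirely fictitious example input:
# triage_screening_hit(
#     {"name": "Jane Q. Example", "dob": "1980-01-01", "country": "GB"},
#     "Local press reports that a Jane Example was fined for ...",
# )
```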

3. Investigations and reviews

LLMs and generative AI have the potential to improve transaction monitoring investigations and customer due diligence review processes. Currently, these processes are often manual, tedious and time-consuming, involving tasks like data collection, result summarization, and drafting findings or conclusions. However, with the power of LLMs and generative AI, these tasks could be streamlined, saving valuable time and effort for compliance professionals.  
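One way this streamlining could look in practice, sketched under the assumption of the OpenAI SDK and a fictitious alert schema: structured alert facts go in, and a first-draft narrative comes out for the investigator to edit and sign off.

```python
from openai import OpenAI

client = OpenAI()


def draft_investigation_narrative(alert: dict) -> str:
    """Turn structured alert facts into a first-draft narrative for review."""
    facts = "\n".join(f"- {key}: {value}" for key, value in alert.items())
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        temperature=0,
        messages=[
            {
                "role": "system",
                "content": (
                    "Draft a factual transaction monitoring narrative. "
                    "Use only the facts provided; do not speculate."
                ),
            },
            {"role": "user", "content": facts},
        ],
    )
    # A draft for the investigator to edit and sign off, not a filing.
    return resp.choices[0].message.content


example_alert = {  # fictitious data
    "alert_id": "TM-2024-0001",
    "rule": "Rapid movement of funds",
    "customer": "Acme Trading Ltd",
    "pattern": "12 inbound wires followed by same-day outbound transfers",
}
```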

4. Reporting and thematic analysis

Financial crime teams frequently face challenges when gathering and reporting information across functions, business lines, or jurisdictions, especially when trying to understand thematic problems or root causes. LLMs can analyze large datasets, extract pertinent information, and generate comprehensive reports, which could greatly assist financial crime teams in identifying common patterns across different issues or areas and gaining valuable insights.
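A hedged sketch of the thematic analysis idea, again assuming the OpenAI SDK and fictitious issue data: a real pipeline would chunk, deduplicate, and validate inputs, but this shows the basic shape of asking the model to group issues into recurring themes with cited evidence.

```python
from openai import OpenAI

client = OpenAI()


def extract_themes(issues: list[str]) -> str:
    """Group compliance issues into recurring themes with cited evidence."""
    numbered = "\n".join(f"{i + 1}. {text}" for i, text in enumerate(issues))
    prompt = (
        "Across the compliance issues below, identify recurring themes and "
        "likely root causes. Cite issue numbers as evidence.\n\n" + numbered
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        temperature=0,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


issues = [  # fictitious examples
    "KYC refresh overdue for 40 corporate clients in branch A.",
    "Screening list update delayed by two days in March.",
    "Branch B missed periodic review deadlines for high-risk clients.",
]
# print(extract_themes(issues))
```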

Getting past the barriers

As the potential benefits and use cases for generative AI and LLMs in financial crime compliance become more apparent, the challenges of implementation remain significant. Establishing proper governance and controls is crucial to deploying these solutions successfully and avoiding potential mistakes. Depending on the specific use case, these solutions are often treated as models, making explainability and model validation critical factors for success. Institutions should plan to address these challenges effectively:

  • Establish communications and engagement plans for internal and external stakeholders, such as regulators, senior business leaders, model risk management, and internal audit;
  • Align the transition plan and metrics to improved risk coverage and enhanced quality, not just efficiency gains;
  • Assess team skills and identify up-skilling required to design, develop, and operate generative AI or LLM-based solutions.

Learn more about how AI can improve client screening in Napier’s whitepaper ‘Sharpening Sanctions Compliance with NextGen Client Screening’.  


David Choi is a Partner at Oliver Wyman based in New York. He is a member of the Digital and Anti-Financial Crime Practices. Prior to Oliver Wyman, David was a Technology Strategist with Microsoft, where he worked with global banks to transform IT platforms and digitize banking products by leveraging public cloud and advanced data solutions. David’s expertise is in compliance and regulatory technology.