
Could ChatGPT play a role in fighting financial crime?

Will Monk, Chief Product Officer at Napier, considers the impact of language-based AI on the finance and banking sectors – particularly when it comes to fighting financial crime.

William Monk
June 27, 2023

Incredible excitement about the opportunities of Artificial Intelligence (AI), coupled with warnings about AI taking over our jobs and livelihoods, has never felt more real than with the recent popularity of ChatGPT. Released by OpenAI in November 2022, the language-based, deep learning tool has already been making waves in search engines, writing, research, education, coding, and many other fields. In particular, it has been creating new competition for the tech giants in search and results optimisation. With its extraordinary search capability, ChatGPT blurs the lines between original content and rehashed information, drawing on the world’s vast library of language-based data.

But can its functions be extended to other sectors? A recent survey from The Economist Intelligence Unit found that “77% of bankers believe that unlocking value from AI will be the differentiator between winning and losing banks” - a clear signal that the stakes couldn’t be higher.  

While some of us will approach ChatGPT with caution - the Italian Data Protection Authority (Garante per la protezione dei dati personali), for example, temporarily banned the service citing General Data Protection Regulation (GDPR) concerns - others are embracing this behemoth. Microsoft will integrate the tool into Office365, and it could also be game-changing for online accessibility, with the AI powering tools such as Flowy.

When looking at its potential impact on banking and financial services, we need to consider whether it is relevant to the sector before discussing whether to rein in its power or harness it for good. As with many things in life, the short answer is ‘it depends’. So, let’s consider a couple of use cases.

Standardising Customer Service

Using ChatGPT for financial customer services seems a safe enough option for delivering standardised sets of procedures, such as onboarding or 24/7 virtual assistance, for which many financial organisations already use chatbots. ChatGPT is a language-based model, and it is both far more powerful and more intuitive than anything we’ve seen to date.

However, consideration needs to be given to issues such as bias, data protection and data ownership. There’s also a question over legal liability should ChatGPT give an incorrect answer which results in loss or damage to the human user. These challenges are within the remit of current teams in financial services, and undoubtedly can be resolved with relative ease. Furthermore, the level of risk is relatively low in such cases, which is why I foresee a large uptake of ChatGPT (or the likes of it) in the customer service arena.  
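To make this concrete, here is a minimal sketch of how a standardised onboarding assistant might be wired up, using the openai Python package (the 0.x interface current at the time of writing). The system prompt, model choice, and onboarding scope are illustrative assumptions, not a production design - any real deployment would sit behind the bias, data protection, and liability controls described above.

```python
import openai  # pip install openai (0.x interface)

openai.api_key = "YOUR_API_KEY"  # placeholder; never hard-code real keys

# A narrow system prompt keeps the assistant on the standardised script
# rather than answering open-ended financial questions.
SYSTEM_PROMPT = (
    "You are an onboarding assistant for a bank. Only answer questions about "
    "the account-opening process: required documents, identity verification, "
    "and expected timelines. If asked anything else, refer the customer to a "
    "human adviser."
)

def answer_customer(question: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0,  # keep answers close to the standardised script
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response["choices"][0]["message"]["content"]

print(answer_customer("What documents do I need to open an account?"))
```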

Compliance Process Optimisation

We might also see it applied to in-house processes: for example, creating first drafts of compliance procedure documents and templates, or automating some initial admin checks, such as verifying that the right loan application documents have been filled out with more or less the right content.
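As a sketch of what such an initial admin check might look like, the snippet below validates a loan application against a required-field list before anything is escalated to a human reviewer (or to a language model for a content review). The field names and rules are hypothetical; the step after this - sending actual application content to an external model - is exactly where the risk discussed below begins.

```python
# Hypothetical first-pass admin check on a loan application.
# Field names and rules are illustrative only.
REQUIRED_FIELDS = {"applicant_name", "date_of_birth", "annual_income",
                   "loan_amount", "proof_of_id"}

def initial_admin_check(application: dict) -> list:
    """Return a list of issues; an empty list means the file can progress."""
    issues = [f"missing field: {field}"
              for field in sorted(REQUIRED_FIELDS - set(application))]
    if application.get("loan_amount", 0) <= 0:
        issues.append("loan_amount must be a positive figure")
    return issues

print(initial_admin_check({"applicant_name": "A. Example", "loan_amount": 25000}))
```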

However, feeding this AI with such sensitive data without knowing where it may end up presents an enormous risk. Financial criminals could even find ways to capitalise on this, for example through data breaches of chat history.

Considering where ChatGPT, a free-to-use chatbot, receives and sends its content, it is curious that most of us have been so trusting up to this point. With concerns about privacy, false information, bias, and factual accuracy coming to light day by day, we still do not know exactly where its knowledge comes from, and the chatbot itself cannot provide a specific enough answer. Furthermore, OpenAI’s own Frequently Asked Questions (FAQs) state that ChatGPT has “limited knowledge of the world and events after 2021” (What is ChatGPT? | OpenAI Help Center). This concerning combination of issues does not bode well for its use in sectors that require highly accurate paper trails and secure data guarantees.

The potential for wider use of ChatGPT is there, but given these unresolved data privacy and security questions, I expect this to come only as a second stage of implementing ChatGPT (or the like) in the internal processes of financial institutions.

Analytics Enhancements

In terms of analytics, it might appear at first glance to be a no-brainer. If ChatGPT can understand and respond to humans, surely doing complex calculations is simply second nature to it?

ChatGPT is a language model whose primary function is to search text and present the user with a written answer to their question. In financial crime prevention, by contrast, analytics are focused on number crunching: detecting human behaviour patterns from those numbers.

While ChatGPT can access a huge store of language-based training data from published sources, there is no parallel, publicly available, high-quality data store to which banks and other financial institutions contribute, because of data privacy requirements. It can perform analytics and do a lot of the tasks analysts would, but you need to give it the data, which is the big no-no here. And just because it gives an answer does not mean it is the correct one.
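For contrast, the number-crunching side of financial crime analytics is typically handled by purpose-built statistical models trained on an institution’s own data. The sketch below uses scikit-learn’s IsolationForest on hypothetical transaction features to flag outliers for analyst review; the features and thresholds are invented for illustration and bear no relation to any production model.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-transaction features: amount (GBP), hour of day,
# and days since the customer's previous transaction.
rng = np.random.default_rng(0)
typical = rng.normal(loc=[60.0, 14.0, 2.0], scale=[25.0, 4.0, 1.0], size=(1000, 3))
unusual = np.array([[9500.0, 3.0, 0.1],    # large, late-night, rapid-fire
                    [8700.0, 2.0, 0.1]])
X = np.vstack([typical, unusual])

model = IsolationForest(contamination=0.01, random_state=0).fit(X)
scores = model.decision_function(X)   # lower scores = more anomalous
flagged = np.argsort(scores)[:5]      # top candidates for analyst review
print(flagged)
```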

There’s not a relevant use case for ChatGPT in terms of analytics right now, but we anticipate that bespoke generative models and start-ups will begin to pop up. It’s worth noting that AI analytical models are already in use in our product.

With regulatory and compliance requirements such as Anti-Money Laundering (AML) mandates, the perils of inbuilt bias and unverified information skewing ChatGPT’s outputs can be very severe. Failure to provide accurate and timely reporting on suspected financial crime can lead to fines in the hundreds of millions of dollars, as well as potential imprisonment for those responsible for the failings.

Given these potential consequences, and that ChatGPT and similar tools are too general and prone to making mistakes, we do not see a role for ChatGPT (or the like) in compliance. It would need a focused ‘FinCrimeGPT’ to change that.

The bottom line is that financial services need watertight compliance to function properly. This is especially true when it comes to AML procedures – there is no margin for error, and we cannot risk criminals knowing they are under investigation. While existing AI tools, engineered specifically for this role, find great success in hunting for irregularities in customer behaviour and meeting regulatory compliance requirements, ChatGPT’s potential in the sector is questionable. Not only is it not proficient with numbers, but the margin for error is unacceptable in such an important area of societal wellbeing. A 2018 Refinitiv study put the aggregate lost turnover as a result of financial crime at $1.45 trillion - hardly a figure we should leave up to the unknown.

Regulatory Red-Tape

AI implementation is becoming significant: the Bank of England found in a 2022 survey that “72% of UK financial services firms reported using or developing Machine Learning (ML) applications”. However, in this highly regulated sector, those backing ChatGPT as a solution for compliance or preventing financial crime are likely to be stopped in their tracks early on. Given the lack of clarity about where it draws information from and sends it to, we cannot expect this technology to hold up in front of a regulator or under GDPR scrutiny. A discussion paper by the Financial Conduct Authority (FCA) and the Bank of England outlines some of the regulatory issues under consideration. Notably, in the case of ChatGPT it would be hard to meet the objective of “making sure that AI is appropriately transparent and explainable”. Rather than language-based, public solutions, the authorities will want to see solid RegTech innovation fill the gaps and allow them to approve it as safe and secure AI.

So, while language-based AI solutions like ChatGPT have the potential to be helpful in financial services and banking, they currently fall far short when it comes to regulation. The results from generative models can feel correct, yet be factually wrong. The models can also be gamed to give an intended output, meaning that companies should adopt them with caution, employing all the usual processes required for confidence.

Improve your compliance processes with an award-winning solution

Get in touch to see how our intelligent platform can help your organisation transform its compliance; or request a demo to see it in action.

 

Photo by Emiliano Vittoriosi on Unsplash
