Frontier AI: Risks and Rewards in Financial Crime Compliance

As we journey through the evolving landscape of generative AI, it is set to have a huge impact across all sectors, including financial services.

Elise Thrale
November 2, 2023

As we journey through the evolving landscape of generative AI, it is set to have a huge impact across all sectors, including financial services. However, this journey is not devoid of challenges: the UK government recently published a report on the ‘capabilities and risks from frontier AI’ ahead of the global AI Safety Summit, hosted in the UK. Frontier AI refers to highly capable foundation models that could possess dangerous capabilities sufficient to pose severe risks to public safety.

In the run-up to the summit, UK Prime Minister Rishi Sunak emphasised the need for a candid examination of the risks associated with artificial intelligence, and the importance of governments tackling those risks effectively.

In his speech at the event, Rishi Sunak described the "new opportunities" AI promises for economic growth, but warned that it also brings with it "new dangers." It is an acknowledgment of these dualities that will pave the way for responsible AI integration.

The timeline of generative AI

Generative AI, part of the frontier AI discussion, is an umbrella term for machine learning algorithms that generate various types of output, such as sound, images, text, and videos, based on given inputs. Large Language Models (LLMs), such as GPT (Generative Pre-trained Transformer), are language-specific AI models with billions of parameters.

The architecture behind many of these generative AI models is the Transformer. These models excel at contextual understanding by assigning different weights to each part of the input data, allowing them to maintain a sense of what came before to improve what comes next.
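As a rough illustration of that weighting mechanism, the sketch below implements scaled dot-product attention, the core operation of the Transformer, in plain Python with NumPy. It is a minimal example for intuition only, not any vendor's implementation; production models add learned projections, multiple attention heads, masking, and billions of parameters.

```python
# Minimal sketch of scaled dot-product attention (illustrative only).
import numpy as np

def scaled_dot_product_attention(queries, keys, values):
    """Weight each value by how relevant its key is to each query."""
    d_k = keys.shape[-1]
    # Similarity between every query and every key, scaled for numerical stability.
    scores = queries @ keys.T / np.sqrt(d_k)
    # Softmax turns the scores into attention weights that sum to 1 per query.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output row is a weighted mix of the values: a context-aware representation.
    return weights @ values

# Toy example: 3 tokens represented by 4-dimensional embeddings.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(3, 4))
contextualised = scaled_dot_product_attention(tokens, tokens, tokens)
print(contextualised.shape)  # (3, 4)
```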

ChatGPT, introduced by OpenAI in November 2022, has demonstrated its transformative power across industries. It has made a momentous impact, raising awareness of generative AI, and has redefined how information is accessed and utilised, blurring the line between original content and repurposed information.

However, the roots of generative AI extend further back: Apple introduced Siri in 2011, and Yoshua Bengio’s 2003 work on probabilistic language models laid the groundwork for modern language understanding. Generative AI has a rich history, with remarkable advancements made over the years, but the recent acceleration in technological progress brings with it a pressing need for safety and trust.

Addressing the risks

These deep learning models, such as convolutional neural networks and Transformers, add layers of abstraction to the process, making explanations harder to derive from their outputs.

The highly regulated nature of the financial sector presents a significant challenge for integrating generative AI solutions. Regulators demand transparency and explainability, aspects that current models struggle to provide. It is crucial to develop specialised RegTech solutions that meet regulatory standards while maintaining customer experience, as the decisions these systems make can block legitimate transactions until they are reviewed manually by analysts.

The journey of generative AI, exemplified by ChatGPT, is marked by its potential to reshape industries and the financial sector's interest in its capabilities. As the financial industry explores this evolving landscape, it must address challenges related to data privacy, accuracy, and regulation.  

As Dr Janet Bastiman, Napier’s Chief Data Scientist and Chair of the Royal Statistical Society’s Data Science and AI Section put it, “Education rather than scaremongering is the key. We need transparency on the accuracy of these models, the ability for the end user to understand where the data was sourced that led to the decision. We need to look at the potential for issues and benefits and how this will change the nature of society without resorting to binary thinking. For this we need diverse voices.”

Striking a balance between innovation and safeguarding compliance processes is paramount in this ever-changing world of AI. It is this equilibrium that will determine the successful integration of generative AI in financial crime compliance and beyond, ensuring that its promise is realised while mitigating its risks.

Read more about how to address AI challenges in our whitepaper on the optimal path to AI in financial crime compliance.

Photo by Mauro Sbicego on Unsplash
