Make AI simple for analysts to use

Technological advances combined with important ethical and moral considerations have helped ensure the best AI-powered AML systems are now highly explainable and therefore simple for data analysts to use.

Janet Bastiman
January 21, 2021

If you’re still using legacy AML tech, or even spreadsheets, the thought of implementing AML technology powered by artificial intelligence (AI) might be pretty daunting.

“But I’ll need an army of data scientists!” I hear you say.

Likewise, you may already be using AI but, despite best efforts, find yourself among the many still grappling to understand its outputs.

Yet the reality is that AI-powered transaction monitoring systems should, quite simply, not be difficult for data analysts to understand.

Technological advances, combined with important ethical and moral considerations, have helped ensure the best AI-powered AML systems are now highly explainable and therefore simple for data analysts to use.

What is explainable AI?

Explainable AI is transparent in the way it works. The FCA defines AI transparency as “stakeholders having access to relevant information about a given AI system.”

All users of AI-enhanced systems should be able to access the information they need to understand its insights.

The Royal Society emphasises that the most useful explainable AI systems will provide different users with different forms of information in different contexts: technical information for a developer, for example, and accessible information for a lay user.

So why has explainability in AI been such a challenge?

AI hasn’t always been as explainable as you might expect. The Royal Society highlights that some of today’s AI tools are highly complex, if not outright opaque: their workings can become too difficult to interpret, and so-called ‘black box’ models can be too complicated for even expert users to fully understand.

For example, when flagging a transaction or customer, some AI models provide a numerical score while others give a prediction, which in turn converts into a confidence score. These scores can be very hard to interpret because you first need to understand how they are calculated. Many data analysts do not understand these calculations, which makes it difficult for them to explain the system’s workings to stakeholders.
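To make this concrete, here is a minimal sketch (not any particular vendor’s pipeline, and the banding thresholds are illustrative assumptions) of how the same transaction can surface as a raw logit, a probability, or a coarse confidence band. None of these numbers, on its own, says why the model reacted.

```python
import math

def logistic(logit: float) -> float:
    """Squash a raw model output (a logit) into a 0-1 probability."""
    return 1.0 / (1.0 + math.exp(-logit))

def confidence_band(probability: float) -> str:
    """Convert a probability into a coarse band.
    These thresholds are illustrative assumptions, not a standard."""
    if probability >= 0.9:
        return "HIGH"
    if probability >= 0.6:
        return "MEDIUM"
    return "LOW"

raw_logit = 2.3          # hypothetical model output for one transaction
p = logistic(raw_logit)  # ~0.909
print(f"logit={raw_logit}, probability={p:.3f}, band={confidence_band(p)}")
# An analyst sees "HIGH" or "0.909", but neither number explains
# which feature of the transaction drove the score.
```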

How does Napier overcome AI explainability challenges in AML?

If you’re looking at thousands, millions, even billions of data points without AI, it is very difficult to spot suspicious activity. Mathematical averages give an easy-to-understand view, but when behaviour changes over time, averaging can smooth out the very spikes you need to see.

AI can help because, in contrast, it looks for discrepancies against those averages: the same lens, but a different viewpoint.
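To illustrate the idea, here is a minimal sketch assuming daily transaction totals and a simple trailing average; real AML models are considerably more sophisticated, but the principle of flagging discrepancies against an expected value is the same.

```python
def flag_discrepancies(values, window=7, threshold=0.5):
    """Flag values that deviate from the trailing average of the
    previous `window` values by more than `threshold` (0.5 = 50%)."""
    flags = []
    for i in range(window, len(values)):
        avg = sum(values[i - window:i]) / window
        deviation = abs(values[i] - avg) / avg
        if deviation > threshold:
            flags.append((i, values[i], round(deviation, 2)))
    return flags

daily_totals = [100, 105, 98, 102, 99, 101, 103, 250, 100, 97]
print(flag_discrepancies(daily_totals))
# [(7, 250, 1.47)] -> day 7 is 147% above its trailing average
```

Even this toy version shows the benefit: the flag carries the size of the deviation, not just a verdict.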

Taking this a step further, Napier ensures that despite the use of complex algorithms, these discrepancies are highlighted in a way that is easily readable and understandable for the person using the system.

With no mysterious black boxes, Napier’s AML solution overcomes explainability challenges by providing relevant information and supporting its insights using one or a combination of the following:

• Data-rich graphs and charts

• Highly visual activity heat maps

• Simple, easy-to-understand explanatory sentences

It is important that AI-enhanced AML systems explain the decisions behind their scores so data analysts know and understand why a transaction has been flagged as unusual: because it deviates by 50% from comparable transactions on a particular day, for instance.
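As a rough sketch of how such an explanatory sentence might be assembled (the wording, field names, and peer comparison here are illustrative, not Napier’s actual output):

```python
def explain_flag(amount: float, peer_average: float, date: str) -> str:
    """Build a plain-English explanation from a transaction's deviation
    against the average of comparable transactions on the same day."""
    deviation_pct = (amount - peer_average) / peer_average * 100
    direction = "higher" if deviation_pct > 0 else "lower"
    return (f"Flagged: this transaction is {abs(deviation_pct):.0f}% "
            f"{direction} than comparable transactions on {date}.")

print(explain_flag(amount=1500.0, peer_average=1000.0, date="2021-01-21"))
# Flagged: this transaction is 50% higher than comparable transactions on 2021-01-21.
```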

Tip: AI will be most helpful if it works in line with your company’s risk appetite and parameters, whether that be a maximum transaction value or permitted transaction currencies. This helps data analysts make sense of its decisions.
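One way to picture this tip is risk appetite expressed as explicit parameters that the monitoring logic checks against, so every flag maps back to a named rule. The field names and limits below are hypothetical examples, not Napier configuration.

```python
# Hypothetical risk-appetite parameters (illustrative values only).
RISK_APPETITE = {
    "max_transaction_value": 10_000.00,
    "permitted_currencies": {"GBP", "EUR", "USD"},
}

def breaches_risk_appetite(amount: float, currency: str) -> list[str]:
    """Return the risk-appetite rules a transaction breaches, so the
    analyst can see exactly why it was surfaced."""
    reasons = []
    if amount > RISK_APPETITE["max_transaction_value"]:
        reasons.append("amount exceeds maximum transaction value")
    if currency not in RISK_APPETITE["permitted_currencies"]:
        reasons.append(f"currency {currency} is outside the permitted set")
    return reasons

print(breaches_risk_appetite(12_500.00, "RUB"))
# ['amount exceeds maximum transaction value', 'currency RUB is outside the permitted set']
```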

3 reasons why explainability in AI is so important

From efficiency to autonomy, explainability in AI creates many benefits:

1. Quick and simple for analysts to understand

AI gives power back to data analysts. They can immediately access the information they need to understand an insight and make the right decisions at the right time. They can also explain the system and their actions and, if required, extract information to write an accurate, informed board report or suspicious activity report (SAR).

2. Reduces the need to hire highly skilled data scientists

Explainability is so important because it enables users such as data analysts to actually use the system without the support of highly specialised staff.

Data scientists are expensive and in short supply, which makes it very difficult to hire the right people with the right skills, particularly with the industry knowledge required in the AML space.

Since explainability enables analysts to understand decisions made by AI, companies can avoid the extra expense associated with hiring data scientists.

After all, what’s the use of a highly sophisticated set of algorithms if you need to employ an army of data scientists to interpret the outputs?

3. Encourages the adoption of AI technology, which in itself offers transformational insights into suspicious behaviour

Some companies steer clear of AI simply because they mistakenly believe all AI systems are complex and difficult to work with. However, when these companies discover how easy AI can make AML compliance, their opinions will most certainly change. AI adoption in AML is not a matter of if, but when.

Looking to the future: AI and AML

AI explainability goes hand in hand with Napier’s machine learning research and development. While regulators and banks may still be on the fence about adopting AI right now, the technology is here to stay.

We believe that technology should drive efficiency for its users, and that AI is the technology to do this.

As such, our focus at Napier is for AI to be completely accessible to all businesses, for AI to be easy to use and for AI to be easy to understand.

Book a demo to see just how simple AI is

The best way to see how simple AI is? See it for yourself. For information, advice, or to book a demo of any Napier product, please get in touch with our expert team.

Chair of the Royal Statistical Society’s Data Science and AI Section and a member of the FCA’s newly created Synthetic Data group, Janet started coding in 1984 and discovered a passion for technology. She holds degrees in both Molecular Biochemistry and Mathematics and has a PhD in Computational Neuroscience. Janet helped both start-ups and established businesses implement and improve their AI offerings before bringing her expertise to Napier as Head of Analytics. She regularly speaks at conferences on AI topics including explainability, testing, efficiency, and ethics.