Building trust and confidence in AI

Dr Janet Bastiman appeared on the MCubed podcast to discuss explainable AI (XAI), its history and its uses in modern society – especially in anti-financial crime.

Napier AI
November 4, 2021

Our Chief Data Scientist, Dr Janet Bastiman, recently appeared on the MCubed podcast to discuss explainable AI (XAI), namely its history and its applications in modern society – especially in anti-financial crime.  

Watch the episode below:

A question of trust

We devolve trust all the time. Every time we get into a vehicle we're not driving, board an aircraft flying on autopilot, or even watch the news, we relinquish control of a situation or of information to someone or something else.

Sometimes we find this easy; sometimes it feels uncomfortable and we don't want to do it. Herein lies the problem: trust depends on your experience and on the context of the situation.

As an example, Janet posed a simple question: “Do you trust me, and what I'm about to say?”  

She pointed out that, as an invited subject matter expert, she may be seen by listeners as inherently trustworthy, or they may take the stance that trust has to be earned. She acknowledged that listeners may even be neutral towards her, or be waiting to see whether she says something they agree with, which would increase their trust in her and in what she has to say.

Another factor in the question of trust is that people have an implicit confirmation bias: we readily trust information that agrees with our preconceptions and vehemently challenge information that disagrees with them. Even when presented with the underlying facts, it can be difficult for us to trust information that differs from our expectations.

Why do we find it hard to trust AI, and why is it important to do so for financial crime compliance?

When it comes to trusting artificial intelligence (AI), Janet firmly believes that end users, those who act upon or are affected by the information it produces, need to be comfortable enough not just to understand a result, but to believe and trust it.

Janet has spent the last twenty years working on complex technical systems and ensuring concepts like machine learning and data science are accessible. She firmly believes that while not everyone needs to understand the mathematics, they do need to trust the outputs.

“Not everyone needs to understand the mathematics (of AI), but they do need to trust the outputs”.

At Napier, we provide AI-enhanced solutions to detect money laundering and financial crime across the financial services industry worldwide, and when you're dealing with criminal activity of any sort, you need evidence to make a case.

The relationship of trust between our system and users is important to us, and it isn't good enough for the system to say ‘stop this transaction,’ or ‘this transaction is suspicious.’ We need to provide an explanation as to why this is the case.
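
As a loose illustration of the difference (not Napier's actual output format), an alert that carries its own reasons might be shaped something like the sketch below; the reason codes, wording and weights are entirely hypothetical:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ReasonCode:
        code: str      # machine-readable identifier
        detail: str    # plain-language explanation for the analyst
        weight: float  # how strongly this factor contributed to the score

    @dataclass
    class TransactionAlert:
        transaction_id: str
        risk_score: float                         # model output, 0.0 to 1.0
        reasons: List[ReasonCode] = field(default_factory=list)

    # An alert that says *why*, not just *what*
    alert = TransactionAlert(
        transaction_id="TX-000123",
        risk_score=0.91,
        reasons=[
            ReasonCode("RAPID_MOVEMENT", "Funds moved out within 10 minutes of arrival", 0.45),
            ReasonCode("NEW_COUNTERPARTY", "First payment to this beneficiary", 0.30),
            ReasonCode("AMOUNT_SPIKE", "Value is 12x the account's 90-day average", 0.16),
        ],
    )

The point of the shape is simply that the explanation travels with the decision, so an analyst can build a case rather than take a score on faith.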

Legislation around AI explainability and ethics

Numerous publications worldwide are driving legislation for AI, and the biggest topics we’ve seen debated in the last few years have been AI explainability and ethics.

The use of AI in high-risk situations (for example, where the outcome has a material impact on the end user) calls ethics into question, prompting regulatory bodies to demand that a human be kept in the loop, or that results are tested for fairness and explainability.

The concern that critical decision-making could be handed over to systems that themselves have little or no direct oversight is widely held, and it leads back to the problem of who to blame when things go wrong.

So rather than trying to regulate the techniques themselves, because these are moving so quickly, guidelines, legislation and best practices are pushing for human oversight, shifting the conversation away from the technology itself to how it is used.

AI and regulatory guidance in the financial sector

Examples of current guidance on AI for the financial sector:

  • The Monetary Authority of Singapore (MAS) has recently published principles stating that explainability should be a priority in the choice, development, and implementation of AI models, tailored to the understanding of key users and stakeholders. MAS also cautions that we should seek to understand and harness the potential of AI while mitigating its dangers.
  • New legislation from the European Union demands explanations and a human in the loop for high-risk decisions, but it does not clearly define what constitutes a high-risk decision.

The current trend is that the topics start off as discussions, papers or principles, and go on to form guidelines and legislation within a matter of years.  

Proposed regulations often lack detail and the nuance needed for implementation. This is partly by necessity: technology is moving so fast that overly specific legislation quickly becomes outdated, so it has to stay at an abstract level.

Hence, it's important to maintain an awareness of what is topical in AI discussions and also to recognise the biggest myths about AI.

AI myth #1: AI can’t be accurate and explainable

Decades of research and industrial use have created systems that perform well, but explainability, showing how results are achieved, has not been a priority. Only in the past five years has clarity around how AI works become important again, in step with its increasingly pervasive role in society.

An unfortunate trope that’s been prevalent in the AI community for years is that you must choose between explainability and accuracy in an AI model. This is a logical fallacy and creates another trust problem.  

But where did this myth come from?  

In 2016 the Defense Advanced Research Projects Agency (DARPA) in the USA launched a project to promote explainable AI, deeming it necessary to understand, appropriately trust, and effectively manage an emerging generation of artificially intelligent machine partners.

In the initial call for proposals, a reference to new deep learning models stated that although more opaque models are more effective, they are less explainable, implying that transparent, more explainable models are therefore less effective.

This fallacy can be seen repeatedly in the AI community, but when pushed for sources, those perpetuating it point back to the DARPA study, which is neither good science nor up to date.

For a more in-depth look at the history of AI and explainability, take a look at this summary of a recent talk Janet gave.

When is explainability for AI needed?

In the UK, the Information Commissioner's Office (ICO) and the Alan Turing Institute conducted a scientific study looking at when people want explainability, and the answer was a combination of context and personal importance.  

AI myth #2: You can’t explain AI to end users

The second big myth in the AI community is that you can't explain things to end users. In essence this means that explainable AI (XAI) is reserved for data scientists only.

This misconception comes from one of the definitions of XAI: that the models and techniques used in an AI application should produce results that can be understood by human experts, not necessarily by lay users.

Converting a data-scientist explanation into something suitable for a non-data-scientist end user takes a lot of effort. It can also be argued that there is an inherent desire to maintain mystery around AI, which has manifested as resistance among many practitioners to making it accessible.

Janet’s belief (and the belief held at Napier) is that unless the end user of the system can understand why the AI gave the result it did, then it is not explainable AI.  
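
To make that concrete, here is a minimal sketch, not Napier's method, of turning a toy model's raw numbers into a sentence a non-data-scientist could act on; the feature names and data are entirely hypothetical:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    feature_names = ["amount_vs_average", "new_beneficiary", "high_risk_country"]  # hypothetical
    X = rng.normal(size=(500, 3))
    y = (X @ np.array([1.5, 1.0, 2.0]) + rng.normal(size=500) > 0).astype(int)

    model = LogisticRegression().fit(X, y)

    def explain_for_end_user(x, top_n=2):
        # Per-feature contribution to the log-odds for this single transaction
        contributions = model.coef_[0] * x
        top = np.argsort(-np.abs(contributions))[:top_n]
        parts = [f"{feature_names[i]} ({contributions[i]:+.2f} to the risk score)" for i in top]
        return "Flagged mainly because of: " + ", ".join(parts)

    print(explain_for_end_user(X[0]))

The data-scientist view (coefficients and log-odds) stays available, but the end user only sees the handful of factors that actually drove this decision, in their own vocabulary.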

Testing and trusting AI – should you?

One thing that's ingrained in anyone in FinTech is that past performance is not indicative of future results. Everything has a 0% failure rate until it fails for the first time.

This sentiment echoes another large myth that circulates in the AI space: if you test your system well enough, you don't need to explain it.  

This is known as deferring trust, and it’s something we all do every time we board an airplane, train or car. We don’t understand how they work, or what each mechanical component does, but we trust that they have been tested and that the pilot or driver is experienced enough to handle things if they do go awry. In these situations, we rarely check the statistics of these environments and instead base our confidence on historical performance.

AI myth #3: Statistics are all you need to understand AI

The final AI myth Janet addressed concerned statistics, and the commonly held misconception that if you understand probabilities, that’s all the knowledge you’ll need to understand what AI is doing.  

Most people understand probability perfectly well (such as when there is a 20% chance of rain), but statistics are open to interpretation, and should not be taken out of context.  

For example, a 5% chance of rain is considered low, and we would consider it unlucky if it did rain. But if an AI told you that it was 95% likely to be safe to cross the road, the impact of that remaining 5% is very different from that of a 5% chance of rain.
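
A rough, purely illustrative way to see why the same 5% carries such different weight is to compare expected costs; the cost figures below are made up for the sake of the example:

    # Illustrative only: the same 5% probability, very different expected costs
    p_failure = 0.05

    cost_wet_clothes = 10            # hypothetical cost units for getting rained on
    cost_road_accident = 1_000_000   # hypothetical cost units for a crossing accident

    print(p_failure * cost_wet_clothes)    # 0.5     -> easy to live with
    print(p_failure * cost_road_accident)  # 50000.0 -> not acceptable

The probability is identical; it is the consequence attached to it that changes whether the number is reassuring or alarming.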

How high would the accuracy need to be before you trusted that AI? Is there a value that's high enough?

Too often, the AI community is preoccupied with measures of accuracy, percentage scores and confidence intervals, on the basis that a lot of nines is good. To quote Charity Majors, “nines don't matter if users aren't happy.” Scores, statistics and probabilities on their own are not enough; you need to have that trust.

The challenges for data scientists in implementing explainable AI

Implementing explainable AI can be a big change for companies who may be reluctant to replace existing AI systems. However, there’s research from Cynthia Rudin’s lab that looks at creating interpretable layers for neural networks, where concept transparency layers can be added in place of batch normalisation layers to increase explainability without affecting performance. These interpretable layers directly indicate which aspects of the data matter at each stage.

These systems do require work: first to define the concepts, and then to create human-friendly results.

So there is a slight trade-off, depending on whether you want to put the effort into changing what you've already got to make it more interpretable, or into adding post-hoc explanations to your models. But none of these techniques affects the accuracy, and none of them means you can skip thorough testing of your system.
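
For a flavour of the interpretable-layer idea, here is a simplified concept-bottleneck-style sketch in PyTorch. It is not the concept transparency layers from Rudin's research, just an illustration of the underlying principle of pinning intermediate units to named, human-readable concepts; the concept names are hypothetical:

    import torch
    import torch.nn as nn

    CONCEPTS = ["structuring", "rapid_movement", "unusual_geography"]  # hypothetical concepts

    class ConceptBottleneck(nn.Module):
        """Maps raw features to named concept activations, then to a risk score."""
        def __init__(self, n_features: int):
            super().__init__()
            self.to_concepts = nn.Linear(n_features, len(CONCEPTS))  # one unit per concept
            self.head = nn.Linear(len(CONCEPTS), 1)                  # risk score from concepts

        def forward(self, x):
            concepts = torch.sigmoid(self.to_concepts(x))  # each in [0, 1], readable per concept
            score = torch.sigmoid(self.head(concepts))
            return score, concepts

    model = ConceptBottleneck(n_features=10)
    score, concepts = model(torch.randn(1, 10))
    for name, value in zip(CONCEPTS, concepts[0].tolist()):
        print(f"{name}: {value:.2f}")
    print(f"risk score: {score.item():.2f}")

Because the score is computed only from the named concept activations, each prediction can be traced back to concepts a human recognises, rather than to anonymous hidden units.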

What else do you need besides explainability?

There are several factors that can undermine end users' trust in an AI system. Too often, as practitioners, we dive in without doing proper risk analysis and attempt to solve problems without really considering the data that's available, the inherent problems that affect the model, and the ethical considerations.

Explainability by itself is not enough to foster trust: data strategy and testing are also crucial considerations, along with documentation that’s suitable for your end user to understand.

Janet implored listeners to take away from her talk that you need the context, you need to ensure your testing is good enough, and you need to have that documentation.

All these things together are going to be critical to get that trust between you, your models, and your ultimate end users.  
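
As a hypothetical illustration of what end-user-facing documentation could cover (not a Napier template), a minimal “model fact sheet” might look like this; every field name and value is made up:

    # A minimal, hypothetical model fact sheet aimed at end users, not data scientists
    model_fact_sheet = {
        "purpose": "Flags transactions that may indicate money laundering for analyst review",
        "intended_users": "AML compliance analysts",
        "decision_role": "Advisory only; a human analyst makes the final call",
        "data_used": "Transaction amount, counterparties, timing and account history",
        "known_limitations": "Less reliable for account types not seen in training data",
        "testing_summary": "Evaluated on held-out historical transactions, reviewed regularly",
        "explanation_provided": "Each alert lists the top factors that contributed to its score",
    }

    for field, value in model_fact_sheet.items():
        print(f"{field}: {value}")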

Concluding thoughts

So: understand the use case of your models and the amount of explanation you need to provide. Ensure your testing is robust and well documented so that it stands up to scrutiny. If you operate in a high-risk scenario and you're in the EU, or you interact with people in the EU, you must provide an explanation and keep a human in the loop. Always make sure you're not caught out by legislation and recommendations worldwide.

Janet concluded by saying that she always recommends applying the highest standards of the areas you work in, and keeping an eye on standards outside those areas to futureproof, because once legislation has been tested in one jurisdiction, others tend to copy and integrate similar legislation for themselves.

Improve your compliance process with an award-winning solution

Get in touch to see how our Intelligent Compliance solution can help your organisation transform your compliance; or request a demo to see it in action.

Follow us on LinkedIn and Twitter to hear about upcoming Napier events like this one.
