
Explainable AI: What’s needed for AML

Dr Janet Bastiman gave a talk for IBF Singapore on the need for and the history of explainability in AI. She also did some mythbusting and discussed what regulators have to say about explainability.

Napier AI
September 16, 2021

Napier Chief Data Scientist Dr Janet Bastiman gave a talk for the Institute of Banking and Finance (IBF) Singapore recently on the need for and the history of explainability in AI. She also discussed some of the biggest myths that have been perpetuated around explainability in AI, and what regulators in various regions have to say about the topic.

The playback of her talk is included below, and we’ve taken the liberty of summarising some of the key takeaways here.

Explainable AI makes sophisticated technology accessible

Nobody is comfortable blindly accepting decisions. We either devolve the decision to someone we trust to be better informed than ourselves, or we seek to understand and rationalise the decision before agreeing to it.

As more information becomes available to us, the extent to which we devolve that trust is decreasing.

On the one hand, it’s good to be informed and keep an ear to the ground, but there’s also the risk to consider that partial understanding of a topic can lead us to incorrect conclusions.

For years, financial institutions have been reticent to use AI-based technologies because they were an opaque box that could not be understood, which made it difficult to interpret the outputs and build a case based on the AI-assigned score.

Fortunately, driven by the requirements of regulators across multiple industries, things are changing: explainable AI is making sophisticated anti-financial crime systems more accessible to non-data scientists.

The history of explainable AI

To understand why AI hasn’t always been explainable, let’s look at its origins:

In the 1950s, the definition of AI was quite broad: any system that makes a decision that appears to be intelligent from its inputs. Before the advent of obfuscated algorithms, we could directly interpret earlier systems from their code.

Following the 1969 book Perceptrons by Marvin Minsky and Seymour Papert, which set out the limitations of single-layer perceptrons and conjectured that multi-layer versions would fare little better on more complicated problems, funding and interest dried up and AI reverted to more traditional, directly explainable techniques.

In the 1980s, some of the errors in Minsky and Papert's conclusions were corrected, and there was a resurgence in neural network research.

From the 1990s through to the 2010s, the focus was on performance: maximising accuracy and minimising the inherent bias introduced by unbalanced data.

While these are good things, decades of research have created AI systems that perform exceptionally well on test data by constructing abstract internal representations of their problems, and explaining how they reach their results has only become important in the last five years.

The influence of DARPA's explainable AI (‘XAI’) programme, which began in 2016, is also notable, as it aims to promote explainability. However, the initial call for proposals coined a now commonly perpetuated (but inaccurate) sentiment: that “although these more opaque models are more effective, they are less explainable.”

This essentially led to the assumption that explainability and performance are mutually exclusive in AI systems.

4 AI myths

Myths and misconceptions surrounding AI run rife, and some are even reinforced by AI practitioners themselves. It is important for several reasons that AI be explainable, that misconceptions about AI and AI explainability are debunked, and that we understand why they persist.

It is also important to end the arbitrary gatekeeping of knowledge by sharing information and building explainable AI models that keep the end user informed.

Myth #1: You can’t have explainability and accuracy

The issue with this myth is that binary comparisons cause misconceptions.

Bust:

• This statement is based on a false equivalency

• This phrase is often quoted out of context

• Good science shows that context is key

Myth #2: You can’t explain things for end users

This myth arises from the belief among some AI practitioners that AI explanations should be reserved for AI experts only.

Bust:

• It requires less effort if you don’t have to convert mathematical outputs into a natural-language explanation (a sketch of such a conversion follows this list)

• Over-complication of systems can make them sound more impressive than they really are, and explainability threatens that
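
Converting a score into something an end user can act on does not have to be onerous. As a minimal, illustrative sketch (the function name, features, scores and wording below are assumptions made for this article, not Napier’s implementation), a monitoring alert could be summarised in plain language like this:

```python
# Illustrative sketch only: turning an AI-assigned score and per-feature
# contributions into a plain-language explanation for an analyst.
# All names, features and numbers are invented for the example.

def explain_alert(score: float, contributions: dict[str, float], top_n: int = 3) -> str:
    """Build a short natural-language explanation from a model score and a
    mapping of feature name -> contribution to that score."""
    # Pick the features that pushed the score up the most.
    top = sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)[:top_n]
    reasons = ", ".join(f"{name} (+{value:.2f})" for name, value in top)
    return (f"This transaction scored {score:.2f}. "
            f"The main factors raising the score were: {reasons}.")

print(explain_alert(
    score=0.87,
    contributions={
        "transaction amount vs. customer average": 0.34,
        "destination country risk": 0.22,
        "transfers in the last 24 hours": 0.18,
        "account age": 0.02,
    },
))
```

The point is simply that the mapping from mathematical output to analyst-readable text can be a thin layer on top of whatever the model produces, rather than a reason to withhold explanations from end users.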

Myth #3: Testing is better than explanations

Many prominent AI researchers say testing is all that’s necessary to trust that an AI system works, so if it stands up to testing it doesn’t need to be explained.

Bust:

• This is a logical fallacy – there are more than just two options

• Past performance is not indicative of future results, especially when it comes to messy, real-world data

• Test results don’t build cases

Myth #4: You just need to understand probabilities

The issue with probabilities, and statistics more generally, is that they are not as objective as they are often made out to be: interpretations vary from person to person.

Bust:

• Probabilities are viewed through the lens of our own bias: if a result matches our expectations we agree with it, and if it opposes them we say it is wrong

• 95% sounds like a high, promising statistic, but it doesn’t account for the potential impact of the remaining 5% (see the worked example after this list)

• Percentages alone are not enough to gather a full picture
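
To make the last point concrete, here is a back-of-the-envelope sketch; the transaction volume and accuracy figure are assumptions chosen purely for illustration, not figures from the talk. At screening volumes typical of AML, even a seemingly small error rate corresponds to a very large absolute number of misclassified transactions, and the percentage alone says nothing about whether those are false alerts or missed suspicious activity.

```python
# Illustrative arithmetic only: the volume and accuracy below are assumed.
transactions_per_day = 1_000_000   # assumed daily screening volume
accuracy = 0.95                    # "95% sounds like a high, promising statistic"

misclassified_per_day = transactions_per_day * (1 - accuracy)
print(f"At {accuracy:.0%} accuracy, roughly {misclassified_per_day:,.0f} "
      f"of {transactions_per_day:,} daily transactions are misclassified.")
# -> At 95% accuracy, roughly 50,000 of 1,000,000 daily transactions are misclassified.
```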

Regulation around explainable AI

Legislation around explainable AI is new and still developing. Despite the best efforts of legislators, there is broad scope for interpretation of regulation around AI explainability. It is a challenge to create legislation that is specific enough to give useful guidance, but not so specific that it becomes outdated soon after it is issued.

Regulatory approaches and legislation differ between regions, so organisations looking to keep up with legislation around explainable AI should be mindful of:

• Balancing the rights of individuals and their data, and the need to have a cohesive view of data around money flow

• Ensuring AI systems are fully documented, the same as any other rule or process brought in to add value to the business

• Futureproofing: as legislation moves towards promoting explainability, making your AI systems explainable now puts you ahead of the curve

When explainable AI shouldn’t be used

• In the AML industry, bad actors are actively working against us to find weak points in models. Detailed explanations may hint at ways to exploit the system.

• When it risks upsetting the balance between risk and reward. There is no need to explain beyond what is necessary at a user level, as doing so can over-expose the system’s internal workings and create risk.

• If you wouldn’t ask a human for an explanation. If you’re happy with the risk profile of a result and aren’t required by legislation to provide an explanation, then there’s no need to add an additional step in the process.

It’s time to bring back explainable AI

A lack of explainability is a cause for concern in human interactions, so it’s important to hold AI systems and practitioners to the same standard. Over-complication of a concept is often used to make it sound more impressive, or to save the effort of generating an accessible, non-mathematical explanation for end users. It is this inaccessibility that takes the knowledge, and crucially the power, out of the hands of end users.

In its infancy in the 1950s, AI was explainable and could be directly interpreted with relative ease by users. Although the process could get complex, it was still easy to trace the inputs to the outputs and to explain the system in natural language in terms of an “if this then that” decision tree.
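
For illustration, a directly interpretable “if this then that” system of the kind described above might look like the following sketch; the rules and thresholds are invented for the example and are not taken from any real screening system. Every output can be traced back to an explicit, human-readable condition.

```python
# Minimal sketch of a directly interpretable "if this then that" decision system.
# The rules and thresholds are invented for illustration.

def assess_transaction(amount: float, country_risk: str, new_customer: bool) -> str:
    if amount > 10_000 and country_risk == "high":
        return "escalate: large transfer involving a high-risk jurisdiction"
    if new_customer and amount > 5_000:
        return "review: large early transfer from a new customer"
    return "clear: no rule triggered"

print(assess_transaction(amount=12_500, country_risk="high", new_customer=False))
# -> escalate: large transfer involving a high-risk jurisdiction
```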

The last few decades have seen AI’s performance prioritised over its explainability, and the misconception has arisen that explainability and performance in AI models are mutually exclusive, namely that opaque models are more effective.
