
Reduce false positives and add user-friendly explanations

A comprehensive insight into how Napier reduces false positives and adds user-friendly explanations for screening in financial services.

Napier AI
June 15, 2021

Last week one of our legendary data scientists delivered her first virtual presentation at Women Who Code’s Connect Reimagine developer conference. Dr Heather Wilson took to the screen to give attendees a comprehensive insight into how Napier reduces false positives and adds user-friendly explanations for screening in financial services.

If you missed it, here’s a summary of the key points Heather made:

False positives are a huge challenge for screening

To comply with money laundering regulations, individuals, companies and transactions all need to be screened on an ongoing basis. However, around 5% of screened names will result in a hit, and 99.9% of these hits will be false positives. This means that if a bank has one million clients to screen, 50,000 hits will be generated for manual review, and of these only 50 will be escalated for further action.
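To make that scale concrete, here is a minimal sketch of the arithmetic, using the illustrative figures above:

```python
# Illustrative screening funnel (figures taken from this article, not a real bank)
clients = 1_000_000
hit_rate = 0.05               # ~5% of screened names return a hit
false_positive_rate = 0.999   # ~99.9% of hits are false positives

hits = int(clients * hit_rate)                            # 50,000 hits for manual review
escalations = int(hits * (1 - false_positive_rate))       # ~50 genuine escalations

print(f"{hits:,} hits -> {escalations} escalations")
```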

This is only the tip of the iceberg, since screening isn’t a one-off activity. The time and resources financial institutions devote to screening are enormous, and they come at an opportunity cost: those resources could instead be focused on other anti-money laundering activities.

Napier’s AI Advisor uses machine learning to accelerate hit processing

With financial institutions facing overwhelming numbers of hits to review, Napier’s new screening engine, AI Advisor, alleviates this problem by using machine learning to dramatically accelerate hit processing without any human assistance.

The screening engine is built as an add-on layer to existing wide net screening. When the wide net screening returns 50,000 hits for review (of which 99.9% will be false positives), the screening engine is able to eliminate the majority of the false positives. Analysts can then focus on the hits that really do require further investigation and human intuition.

Crucially, a hit that will take a human analyst on average 10 minutes to review will take the screening engine less than 0.25 seconds.  

Sophisticated algorithm for false positive reduction

Our screening application uses a machine learning algorithm to produce a baseline score for how likely two name strings, for example John Smith vs John Smyth, are to refer to the same person, along with a baseline confidence level that the two records match. With each additional field (such as date of birth), both the similarity score and the confidence level adjust up or down accordingly.
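The sketch below illustrates this field-weighted idea in simplified form. Napier’s actual model is proprietary; here Python’s SequenceMatcher stands in for the learned name-similarity score, and the field weights are illustrative assumptions, not Napier’s:

```python
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    # Stand-in for the learned baseline name-string similarity score
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_score(record: dict, watchlist_entry: dict) -> float:
    # Baseline: how alike are the two name strings?
    score = name_similarity(record["name"], watchlist_entry["name"])
    # Each additional field nudges the score up on a match, down on a mismatch
    if record.get("dob") and watchlist_entry.get("dob"):
        score += 0.2 if record["dob"] == watchlist_entry["dob"] else -0.3
    return max(0.0, min(1.0, score))

print(match_score({"name": "John Smith", "dob": "1980-01-01"},
                  {"name": "John Smyth", "dob": "1980-01-01"}))
```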

Taking into consideration the various types of spelling errors, Napier has also created a custom default dataset of English-alphabet names covering the types of mistakes that can occur in screening scenarios. Combined with careful feature value distribution, this lets us select the most informative samples to update our labels and so guide the model to learn the nuances of institution-specific screening. A good set of features with good feature value distributions helps the algorithm learn the nuances of name string matching and improves its discriminative ability to reduce false positives.
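As a hypothetical sketch of how such a labelled dataset might be generated, each function below models one common error type (substitution, transposition, deletion); the real Napier dataset and its error taxonomy are not public:

```python
import random
import string

def substitute(name: str) -> str:
    # Replace one character with a random letter (keyboard slip, OCR error)
    i = random.randrange(len(name))
    return name[:i] + random.choice(string.ascii_lowercase) + name[i + 1:]

def transpose(name: str) -> str:
    # Swap two adjacent characters (typing error)
    i = random.randrange(len(name) - 1)
    return name[:i] + name[i + 1] + name[i] + name[i + 2:]

def delete(name: str) -> str:
    # Drop one character (truncation, mishearing)
    i = random.randrange(len(name))
    return name[:i] + name[i + 1:]

def make_variants(name: str, n: int = 5) -> list[tuple[str, str, int]]:
    # Positive pairs (label 1): the clean name against a corrupted spelling
    ops = [substitute, transpose, delete]
    return [(name, random.choice(ops)(name), 1) for _ in range(n)]

print(make_variants("john smith"))
```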

Explainability is integral to Napier’s algorithm

Napier takes explainability very seriously, so it has always been integral to the development of AI Advisor. A common problem in machine learning is that the algorithm is opaque: a human cannot understand why it made a particular decision. This matters because financial institutions have a legal obligation to explain their screening decisions, so the tools they use must be able to provide those explanations. What’s more, analysts need plenty of information to make decisions quickly.

Napier uses a post-hoc method to explain the effect of each feature on the model. We’ve utilised a tool called SHAP, which computes a close approximation to Shapley values and is commonly used in machine learning to explain the outputs of certain types of models. The Shapley values indicate how much each feature contributed to the decision the algorithm made.
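For the curious, here is a minimal example of the SHAP library in action on a toy model. The features, data and model are placeholders, not Napier’s production setup:

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

# Toy training data: three made-up screening features per hit
rng = np.random.default_rng(0)
X = rng.random((500, 3))   # e.g. name_similarity, dob_match, country_match
y = (X[:, 0] + 0.5 * X[:, 1] > 1.0).astype(int)

model = RandomForestClassifier(n_estimators=50).fit(X, y)

# TreeExplainer computes exact Shapley values efficiently for tree models
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])   # per-feature contributions for 5 hits
print(shap_values)
```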

Napier’s Active Learning Framework ensures continuous model improvement

We are currently working on building an Active Learning Framework for AI Advisor. This framework will allow us to update the model, the dataset or both based on an individual institution’s business cases, with minimal manual intervention. The objective is to update the dataset based on customer behaviour and potentially spot new types of error, improving system performance (including false positive reduction) accordingly. The Active Learning Framework will run as a continuous process after the system is deployed, and it will also tell us if we need to introduce new features to adapt to changing data.
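To illustrate the general active learning idea, here is a toy uncertainty-sampling loop. Napier’s actual framework is not public, so the model, data and query strategy below are illustrative stand-ins:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X_pool = rng.random((1000, 3))              # e.g. similarity features per hit
y_pool = (X_pool[:, 0] > 0.5).astype(int)   # the label an analyst would assign

# Seed set: a handful of past analyst decisions from each class
labelled = list(np.where(y_pool == 0)[0][:5]) + list(np.where(y_pool == 1)[0][:5])

for round_no in range(5):
    model = LogisticRegression().fit(X_pool[labelled], y_pool[labelled])
    proba = model.predict_proba(X_pool)[:, 1]
    # Query the unlabelled hit the model is least certain about (closest to 0.5)
    # and add the analyst's decision on it to the training set
    unlabelled = [i for i in range(len(X_pool)) if i not in labelled]
    query = min(unlabelled, key=lambda i: abs(proba[i] - 0.5))
    labelled.append(query)
    print(f"round {round_no}: {len(labelled)} labels, "
          f"accuracy {model.score(X_pool, y_pool):.2f}")
```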

Book your demo of AI Advisor now

To see Napier’s advanced AI Advisor screening engine first-hand, book a demo today.
