Payments fraud is relentless and shows no sign of abating. According to trade association UK Finance, the UK saw a total of 1.4m cases of unauthorised financial fraud in the first six months of the year, with more than £600m stolen through scams including unauthorised card fraud and authorised push payment (APP) fraud, in which the victim unwittingly transfers funds into a scammer’s account.
These attacks are growing in complexity, constantly changing in structure and sequence, and leaving different digital footprints and patterns. They are also often undetectable by traditional rule-based logic and predictive models, such as data mining and business rule management systems (BRMS).
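To see why fixed rules struggle, consider a minimal sketch of a BRMS-style check. The thresholds and field names here are hypothetical, chosen purely for illustration: a scammer who structures payments to stay just under the limits slips past every rule.

```python
def rule_based_check(tx: dict) -> bool:
    """Flag a transaction as suspicious using fixed, hand-written rules."""
    if tx["amount"] > 10_000:                      # hard amount threshold
        return True
    if tx["new_payee"] and tx["amount"] > 2_500:   # large payment to a new payee
        return True
    return False

# A transfer deliberately split into smaller payments evades both rules.
structured = [
    {"amount": 2_400, "new_payee": True},
    {"amount": 2_400, "new_payee": True},
    {"amount": 2_400, "new_payee": True},
]
print(any(rule_based_check(tx) for tx in structured))  # prints False
```

Each rule catches only the pattern it was written for; when fraudsters change the sequence, the rules have to be rewritten by hand.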
Unfortunately, in the battle to prevent company funds being paid to fraudulent recipients, these processes are both time-consuming and cumbersome.
But when it comes to combating payments fraud, AI has enormous potential. By processing large amounts of data and ‘learning’ to adapt to countless different scenarios, AI could be a game-changer in the war against payments fraud.
Learning to fight
The power of AI lies in its ability to root out anomalies in a company’s large-scale data sets in a matter of milliseconds. The more data that there is for a machine learning model to process, the more accurate the predictive value will be.
So when it comes to payments fraud, every transaction waiting to be processed can be given a rating, as the algorithm will have learnt how to differentiate between a legitimate transaction and a fraudulent one.
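The scoring idea above can be sketched very simply. This is an illustrative toy, not any vendor's model: it rates a pending transaction by how far its amount deviates from the account's historical behaviour, whereas real systems learn across many features (payee, timing, device, geography) and far larger data sets.

```python
import statistics

def fraud_rating(history: list[float], amount: float) -> float:
    """Return a 0-1 rating for a pending transaction; higher = more anomalous."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1.0   # guard against zero spread
    z = abs(amount - mean) / stdev             # deviation from past behaviour
    return min(z / 4.0, 1.0)                   # squash the z-score into 0-1

history = [120.0, 95.0, 110.0, 130.0, 105.0]   # this account's past payments
print(fraud_rating(history, 115.0))    # ordinary amount -> low rating
print(fraud_rating(history, 5_000.0))  # extreme outlier -> rating of 1.0
```

The more history the model sees, the better its picture of "normal" becomes, which is why predictive accuracy improves with data volume.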
At this stage, many financial institutions have yet to wake up to this fact. According to AI Innovation Playbook III, a recent survey by PYMNTS in collaboration with AI company Brighterion, just 5.5% of the financial institutions surveyed have adopted AI, and only 12.5% of decision makers who work in fraud detection rely upon it. In contrast, 92.5% of fraud detection and analysis decision makers prefer to use data mining, while 65% of professionals in the same area use BRMS.
Barriers to adoption
The problem, it seems, is one of perception: 60% of fraud specialists believe that AI systems lack transparency, another 60% view them as complicated and time-consuming compared with data mining and BRMS, while 36.5% believe they are unable to quantify the related return on investment (ROI).
Paul Thomalla, Global Head of Payments at Finastra, believes that one of the main reasons AI is not being taken up by financial institutions and businesses on a much bigger scale is that the industry is still in the early stages of experimentation.
“There are few well-known fraud prevention case studies in the industry, and as with any new technology there will need to be proof before it can move past the conceptual stage,” he says. “Another reason is that there needs to be a change of mindset in approaches to fraud prevention.”
Thomalla continues: “The standard reactionary approach to fraud is more tangible and immediate, and therefore easier for businesses to invest time and money into prevention. AI’s more proactive approach of trying to predict future fraud is less immediate, and therefore more complicated.”
For Simon Shorthose, Managing Director at Kyriba Northern Europe, making AI-based detection of payments fraud and money laundering easier requires the greatest possible visibility of payments, which also means building a large set of scenarios across an extended portfolio of formats.
“It is also necessary to monitor connectivity across the various IT systems in order to follow the payments during their full life cycle,” Shorthose says. “Few financial or business institutions have developed such robust processes.”
So what actually needs to change in order to make AI the industry standard in the battle against payments fraud?
Finastra’s Thomalla believes that the adoption of AI in fraud prevention is essentially a high-level conversation about moving from a rules-based approach to one built on the belief that AI can help win the war against payments fraud. This change in mindset and approach may take time, but it is essential for businesses to start thinking more strategically about fraud prevention in the first instance.
“There’s no standardisation of AI in fraud prevention that I can see in the immediate future, as we are still very much in the early stages,” Thomalla says. “As more players implement AI and we see successful use-cases, then adoption will only increase.”
For Thomalla, a change of mindset about how fraud is tackled must be a priority. “It’s also important to recognise that AI should not replace traditional approaches, but is a new strategic play to support previous fraud prevention techniques,” he says.
It is early days, but as the industry increasingly adopts AI and embeds it in its IT systems, it certainly seems that one day AI could end up winning the payments fraud war.