Artificial Intelligence and machine learning are touted as game-changers in the detection of financial crime but, despite some promising results, Paul Cochrane finds obstacles to roll-out, not least around regulators’ ability to green-light new technologies as well as effectively utilise the data they produce.
Drivers for change
Major financial institutions are investing in next-generation technology, driven partly by a need to reduce the costs of complying with AML/CFT regulations, estimated at USD 23.5 billion a year in the USA and close to USD 20 billion in Europe. (1) A further motivator is the exceedingly high rate of false positives generated by suspicious activity reports (SARs) under existing systems – running at more than 90% of filings in the USA. (2)
“We have seen a rise in… technology vendors trying to address the cost base of AML compliance and to minimise false positives through employing AI and supervised machine learning techniques. This technology is typically employed as a second filter to try to improve effectiveness,” said Michael Shearer, Global Head of Product Management, Financial Crime and Threat Mitigation at HSBC in London.
Vendors such as US-based Ayasdi and FICO, and UK-based Quantexa, are developing such software, often in partnership with large financial institutions like HSBC, which have the capital to invest in such technology as well as harness big data sets for testing new systems.
Transaction monitoring systems (TMS) are the main focus, with AI segmenting clients according to their financial profile and behaviour based on past transactions. “AI solutions can be more surgical about the way financial crime is detected,” said Shearer. “The industry-standard approach can be blunt, but machine learning can build a complex set of rules based on behaviour which a human simply couldn’t write, and apply them routinely across the whole customer book.”
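The contrast Shearer draws between a blunt global rule and a behavioural, per-customer model can be sketched in miniature. This is a purely illustrative example, not HSBC’s or any vendor’s actual system; the accounts, amounts and thresholds are invented:

```python
# Illustrative only: a blunt fixed-threshold rule vs. a per-customer
# behavioural baseline for transaction monitoring.
from statistics import mean, stdev

# Hypothetical transaction histories for two customers
history = {
    "acct_a": [120, 95, 110, 130, 105],    # small retail customer
    "acct_b": [9000, 11000, 10500, 9800],  # corporate customer
}

def blunt_rule(amount, threshold=5000):
    """Industry-standard style: one fixed threshold for every customer."""
    return amount > threshold

def behavioural_rule(account, amount, k=3.0):
    """Flag only amounts far above this customer's own baseline."""
    past = history[account]
    mu, sigma = mean(past), stdev(past)
    return amount - mu > k * sigma

# A 1,500 transfer: unremarkable for the corporate, anomalous for the retail
# customer - a distinction the single global threshold cannot make.
print(blunt_rule(1500))                  # False for both accounts
print(behavioural_rule("acct_a", 1500))  # True  (far above ~112 average)
print(behavioural_rule("acct_b", 1500))  # False (below ~10,075 average)
```

The behavioural rule is “surgical” in the sense that the same amount is judged against each customer’s own history rather than one book-wide cut-off.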
The key advantage of such new technology is the automation of manual search and data-entry processes, reducing duplicative work. A Celent report claims that adopting AI can speed up investigation times by 50% or more, while escalation rates can be improved by up to 30%. (3)
Dr Sebastian Hetzler, Vice President Product Management at FICO, an analytics company, told MLB that AI and machine learning can improve the quality and precision of SARs. “We have seen the number of SARs increase by 20%, and investigation processes gain efficiencies of up to 30%, so AI solves issues in both directions,” he said.
The education piece
Yet despite such filing improvements, only a handful of regulators – in the US, the UK, Russia, Singapore and Hong Kong – are encouraging the development of machine learning to improve transaction monitoring systems. Few banks outside the major international players are investing in AI systems. In part, said Matthew Redhead, associate fellow at the RUSI Centre for Financial Crime and Security Studies in London, this is due to uncertainty as to how AI will be received by regulators.
Model validation is a particularly thorny issue, as financial institutions have to be able to explain the analysis and output to regulators. “It is nightmarish, and massively difficult to resolve,” said Redhead.
Indicative of the difficulty are anomalies that pop up live. “The beauty of using some of these machine learning techniques is the identification of the anomaly as it appears, in real time. But if you don’t understand why the parameters are moving, you will not get the benefit,” said Redhead. The model validation problem for TMS comes down to the fluidity of the data and the mutability of the algorithms as they seek to identify patterns in transactions - “because the content the algorithm is learning from (the transactional data set) is changing all the time, so what looks odd and anomalous is also likely to change over time,” said Redhead.
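Redhead’s point about mutable algorithms can be illustrated with a toy monitor whose notion of “normal” drifts as it learns from the transaction stream: the same amount that is anomalous early on becomes routine once the baseline has moved. The smoothing factor and threshold below are hypothetical, not any vendor’s algorithm:

```python
# Toy drifting monitor - purely illustrative.
def make_monitor(alpha=0.2, k=2.0):
    state = {"mu": None}  # exponentially weighted "normal" spend level

    def score(amount):
        if state["mu"] is None:
            state["mu"] = amount  # first observation seeds the baseline
            return False
        anomalous = amount > k * state["mu"]
        # The baseline itself learns from every transaction, so what counts
        # as "odd" changes over time - the validation problem in miniature.
        state["mu"] = (1 - alpha) * state["mu"] + alpha * amount
        return anomalous

    return score

monitor = make_monitor()
stream = [100, 110, 105, 400, 380, 420, 400, 410]
flags = [monitor(x) for x in stream]
# The jump to ~400 is flagged at first, then absorbed as the new normal:
print(flags)  # [False, False, False, True, True, True, False, False]
```

Replaying the same transaction months later can therefore produce a different verdict, which is exactly what frustrates a regulator’s demand for repeatability.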
“This is incredibly valuable when trying to identify changes in potential criminal behaviour. But the sheer dynamism of such machine learning can be impossible to explain and repeat for a regulator,” and so to meet their traditional demands for explicability and repeatability.
Banks and vendors are confident, however, that they can effectively demonstrate how a system works and the derived benefits - citing the use of real-time financial crime technologies already in use to detect fraud and the processing of payments. “We are learning from our experience fighting fraud and applying that to AML,” said Shearer. “There are significant similarities in the technology to understand the customer and detect behaviour of concern over time.”
To Hetzler, explainable AI is a “potential game changer.” He agreed that there is “fear of AI as a black box, while in a world of rules everything is explainable and transparent,” but said AI offers many opportunities for banks without bringing them into conflict with regulators who still demand a rules-based approach.
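One common pattern behind “explainable AI” is attaching reason codes to each alert: ranking the features that contributed most to a score, so an investigator or regulator can see why the model fired. A minimal sketch, with invented feature names and weights (not FICO’s actual scoring model):

```python
# Invented weights for illustration only.
WEIGHTS = {"cash_intensity": 2.0, "new_counterparties": 1.5, "round_amounts": 1.0}

def score_with_reasons(features, threshold=3.0):
    """Return (alert?, reason codes) for one customer's feature values."""
    contributions = {f: WEIGHTS[f] * v for f, v in features.items()}
    total = sum(contributions.values())
    # Reason codes: features ranked by their contribution to the score,
    # giving a regulator-readable audit trail for every alert.
    reasons = sorted(contributions, key=contributions.get, reverse=True)
    return total > threshold, reasons

alert, reasons = score_with_reasons(
    {"cash_intensity": 1.0, "new_counterparties": 1.0, "round_amounts": 0.0}
)
print(alert)       # True
print(reasons[0])  # "cash_intensity" - the dominant driver of the alert
```

The ranked reasons are what turns a black-box score into something an investigator can act on and a supervisor can audit.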
With regulators not requiring the adoption of AI, financial institutions that use it are taking a dual approach, keeping current rules-based systems in place supported by a supervised AI model. “In a rules-based world, I see AI as augmenting systems, not replacing them. It is an evolution, not a revolution,” said Hetzler.
The bigger picture
For AI to be effective and be a real ‘game changer’, regulators and financial intelligence units (FIUs) will need to be able to handle the data passed to them, said Hetzler who noted such discussions were already underway in some quarters. “We are talking with them about AI and cognitive analytics to improve their work, and connect different SARs and so on,” he said.
The ideal solution, which Redhead conceded sounds a “bit utopian”, would be a sectoral or even national approach that collates all the data generated, rather than individual institutions applying their own versions of AI to screening and transaction monitoring. This would require enhanced cooperation between the public and private sectors, in addition to amending data privacy laws, such as the European Union’s General Data Protection Regulation (GDPR), so that such data sharing remains legal. “The problem I see is a real reticence on the part of regulators and law enforcement to even consider this as an option,” said Redhead.
Tom Neylan, Senior Policy Analyst – AML/CFT at the Financial Action Task Force (FATF), said easing SAR confidentiality requirements could enable such cooperation, with FATF “encouraging a more permissive approach to sharing information by national regulators.”
However, there are doubts as to how effectively regulators and enforcement agencies will be able to handle the new technology. “At government agencies technical skills are at a premium, and the gap compared to the private sector is massive. There is a war for talent in this area,” said Redhead.
The surge in SAR filings that AI could bring is a further issue, even in the USA, despite the US Treasury backing the financial sector’s adoption of AI. “I’m not optimistic that governments and law enforcement will respond well to the explosion of financial intelligence data and the adoption of AI. In the USA, we cannot keep up with the approximately 19 million pieces of Bank Secrecy Act (BSA) financial intelligence we have now,” said John Cassara, a former US Treasury special agent.
“Innovative management and personnel practices are effectively discouraged. Treasury’s Financial Crimes Enforcement Network (FinCEN) does not have a good track record with analytic systems of all sorts. An early version of AI at FinCEN was allowed to atrophy and was eventually abandoned. Government IT expertise in financial intelligence is woefully behind the times.”
Global roll-out of AI will depend on regulators backing such initiatives and, inevitably, guidance from FATF on the adoption, and impact, of such innovation would help. There is no current official FATF policy on AI, although it has been discussed by the body as part of a wider programme on how technology is changing AML and CFT.
The Chinese presidency of FATF this year has a particular focus on AML supervision, announced last November (2019). “One theme was technology, not just how supervisors can use it, but also how they need to adjust their approach to technology used by companies. This is a new challenge,” Neylan noted.
Another potential risk is that larger financial institutions push ahead with AI to the detriment of smaller banks that cannot invest in the technology, in advanced as well as less advanced jurisdictions. Currently there is interest from banks in the US, Europe, Asia-Pacific, and certain African countries such as South Africa, said Hetzler, who called this “a big challenge… to smaller tier-three and tier-four banks, as they can’t afford or invest in the same way as larger banks.” He said: “We are working on a consortium approach to give smaller banks access to the technologies - banks send anonymised data, and we pool data and feed back models to banks, so AI is a service in a way.”
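Hetzler’s consortium idea can be sketched as follows: each member bank shares only anonymised aggregate statistics, a pooled baseline is computed centrally, and the resulting model (here reduced to a single threshold) is fed back to every member. This is a simplified illustration of the concept, not FICO’s actual service:

```python
# Simplified consortium pooling - illustrative only.
def local_aggregate(amounts):
    """Each bank shares only count, sum and sum of squares -
    no customer-level or transaction-level data leaves the bank."""
    return len(amounts), sum(amounts), sum(a * a for a in amounts)

def pooled_model(aggregates, k=3.0):
    """Central party combines the aggregates into one shared threshold."""
    n = sum(a[0] for a in aggregates)
    s = sum(a[1] for a in aggregates)
    ss = sum(a[2] for a in aggregates)
    mu = s / n
    var = ss / n - mu * mu
    # Threshold returned to every member bank as a shared detection rule
    return mu + k * var ** 0.5

bank_a = local_aggregate([100, 120, 110])       # small bank's contribution
bank_b = local_aggregate([90, 130, 95, 105])    # another member's contribution
threshold = pooled_model([bank_a, bank_b])
print(round(threshold, 1))  # ~146.3 - built from both banks' pooled data
```

The point of the design is that a tier-three bank with a thin data set still receives a model calibrated on the whole consortium’s volume.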
If some of the obstacles to implementation are overcome, the adoption of AI in AML may indeed take off in the next few years.