AI is a double-edged sword for banks: while it unlocks many possibilities for more efficient operations, it also poses external and internal risks.
Financial criminals are leveraging the technology to produce deepfake videos, cloned voices and forged documents that can evade both automated and human detection, or to supercharge email fraud schemes. In the US alone, generative AI is expected to drive fraud losses to US$40 billion by 2027, an annual growth rate of 32%, according to a recent report by Deloitte.
Perhaps, then, the response from banks should be to arm themselves with even better tools, harnessing AI across financial crime prevention. Financial institutions are in fact starting to deploy AI in anti-financial crime (AFC) efforts – to monitor transactions, generate suspicious activity reports, automate fraud detection and more. These tools have the potential to accelerate processes while increasing accuracy.
Problems arise when banks don’t balance the implementation of AI with human judgment. Without a human in the loop, AI adoption can create compliance gaps, introduce bias, and reduce adaptability to new threats.
We believe in a cautious, hybrid approach to AI adoption in the financial sector, one that will continue to require human input.
The difference between rules-based and AI-driven AFC systems
Traditionally, AFC systems – and in particular anti-money laundering (AML) systems – have operated on fixed rules set by compliance teams in response to regulations. In transaction monitoring, for example, these rules flag transactions based on specific predefined criteria, such as transaction amount thresholds or geographical risk factors.
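To make that concrete, here is a minimal sketch of what a rules-based transaction monitor might look like. The thresholds, field names and rules are illustrative assumptions, not a real compliance configuration:

```python
# Minimal sketch of a rules-based transaction monitor.
# Thresholds, field names and the risk list are illustrative only.

HIGH_RISK_COUNTRIES = {"XX", "YY"}  # placeholder country codes
AMOUNT_THRESHOLD = 10_000           # e.g. a reporting threshold in USD

def flag_transaction(tx: dict) -> list[str]:
    """Return the list of fixed rules a transaction trips, if any."""
    reasons = []
    if tx["amount"] >= AMOUNT_THRESHOLD:
        reasons.append("amount_over_threshold")
    if tx["country"] in HIGH_RISK_COUNTRIES:
        reasons.append("high_risk_geography")
    if tx["amount"] >= 0.9 * AMOUNT_THRESHOLD and tx["recent_count"] > 3:
        reasons.append("possible_structuring")  # repeated just-under-threshold transfers
    return reasons

# Every flag maps back to an explicit rule, which is what makes
# these systems predictable and easily auditable.
print(flag_transaction({"amount": 9_500, "country": "XX", "recent_count": 4}))
```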
AI presents a new way of screening for financial crime risk. Machine learning models can detect suspicious patterns across constantly evolving datasets. The system analyzes transactions, historical data, customer behavior and contextual data to monitor for anything suspicious, learning over time and offering adaptive, potentially more effective crime monitoring.
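By contrast with the fixed rules above, an ML-driven monitor learns what “normal” looks like from data. Below is a minimal sketch using scikit-learn’s IsolationForest, one of many possible model choices, trained here on purely synthetic data:

```python
# Sketch of ML-based anomaly detection for transactions, using an
# unsupervised IsolationForest. Features and data are synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Columns: amount, hour of day, transactions in past 24h (illustrative features).
normal = rng.normal(loc=[100, 14, 3], scale=[50, 4, 2], size=(1000, 3))
suspicious = np.array([[9500, 3, 15]])  # large amount, odd hour, bursty activity

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# score_samples: lower scores mean more anomalous.
print(model.score_samples(suspicious))  # noticeably low score
print(model.predict(suspicious))        # -1 means flagged as an anomaly
```

Note that the model gives a score, not a rule ID: there is no single threshold or criterion to point to, which is exactly the “black box” trade-off discussed next.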
However, while rules-based systems are predictable and easily auditable, AI-driven systems introduce a “black box” element: their decision-making processes are opaque. It is harder to trace an AI system’s reasoning for flagging certain behavior as suspicious, given how many variables are involved. The model may reach conclusions based on outdated criteria, or produce factually incorrect insights, without this being immediately detectable. That opacity can also cause problems for a financial institution’s regulatory compliance.
Possible regulatory challenges
Financial institutions must adhere to stringent regulatory standards, such as the EU’s Anti-Money Laundering Directive (AMLD) and the US Bank Secrecy Act, which mandate clear, traceable decision-making. AI systems, especially deep learning models, can be difficult to interpret.
To ensure accountability while adopting AI, banks need careful planning, thorough testing, specialized compliance frameworks and human oversight. Humans can validate automated decisions by, for example, interpreting the reasoning behind a flagged transaction, making it explainable and defensible to regulators.
Financial institutions are also under increasing pressure to use Explainable AI (XAI) tools to make AI-driven decisions understandable to regulators and auditors. XAI encompasses techniques that enable humans to comprehend the output of an AI system and its underlying decision-making.
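As an illustration of the kind of tooling involved, here is a sketch using scikit-learn’s permutation importance, a simple model-agnostic explanation technique (production XAI stacks often use richer methods such as SHAP). The model, features and data are placeholders:

```python
# Sketch: a simple, model-agnostic explanation of a fraud classifier
# via permutation importance. Model, features and data are placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["amount", "hour", "velocity", "geo_risk"]  # illustrative labels

clf = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(clf, X, y, n_repeats=10, random_state=0)

# Rank features by how much shuffling each one degrades performance:
# a human-readable starting point for explaining a flagged decision.
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```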
Human judgment required for holistic view
AI adoption must not give way to complacency about automated systems. Human analysts bring context and judgment that AI lacks, allowing for nuanced decision-making in complex or ambiguous cases – something that remains essential in AFC investigations.
Among the risks of over-reliance on AI are errors (false positives and false negatives) and bias. AI can be prone to false positives if its models aren’t well tuned or are trained on biased data. While humans are also susceptible to bias, the added risk with AI is that bias within the system can be difficult to identify.
Furthermore, AI models are only as good as the data fed to them: they may not catch novel or rare suspicious patterns that fall outside historical trends or real-world insights. A full replacement of rules-based systems with AI could leave blind spots in AFC monitoring.
In cases of bias, ambiguity or novelty, AFC needs a discerning eye that AI cannot provide. At the same time, removing humans from the process could severely stunt your teams’ ability to understand financial crime patterns and identify emerging trends. In turn, that could make it harder to keep any automated systems up to date.
A hybrid approach: combining rules-based and AI-driven AFC
Financial institutions can combine a rules-based approach with AI tools to create a multi-layered system that leverages the strengths of both approaches. A hybrid system will make AI implementation more accurate in the long run, and more flexible in addressing emerging financial crime threats, without sacrificing transparency.
To do this, institutions can integrate AI models with ongoing human feedback. The models’ adaptive learning would then be shaped not only by data patterns, but also by human input that refines and rebalances them.
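One way such a layered system could fit together is sketched below: deterministic rules run first, an ML score adds nuance, ambiguous cases go to a human review queue, and analyst verdicts become feedback data. The weights, thresholds and outcome labels are assumptions for illustration:

```python
# Sketch of a hybrid AFC decision layer. Rules are non-negotiable and
# always escalate; the model handles nuance; ambiguous cases go to a
# human. Thresholds and labels are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Decision:
    rule_hits: list
    model_score: float       # 0.0 (benign) .. 1.0 (suspicious)
    outcome: str = "clear"

def decide(rule_hits: list, model_score: float) -> Decision:
    d = Decision(rule_hits, model_score)
    if rule_hits:                    # explicit rule tripped: always escalate
        d.outcome = "escalate"
    elif model_score >= 0.9:         # high-confidence model alert
        d.outcome = "escalate"
    elif model_score >= 0.6:         # ambiguous: human in the loop
        d.outcome = "human_review"
    return d

# Analyst verdicts on reviewed cases become labeled data that can be
# fed back to retrain and rebalance the model over time.
feedback_log = []
case = decide(rule_hits=[], model_score=0.72)
if case.outcome == "human_review":
    analyst_verdict = "false_positive"  # recorded by a human analyst
    feedback_log.append((case, analyst_verdict))
```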
Not all AI systems are equal. AI models should undergo continuous testing to evaluate accuracy, fairness, and compliance, with regular updates based on regulatory changes and new threat intelligence as identified by your AFC teams.
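One concrete form such testing can take is a recurring check of error rates across customer segments on held-out, labeled data. This sketch computes per-segment false positive rates and flags outliers; the segment labels, data and tolerance are placeholders:

```python
# Sketch of a recurring fairness/accuracy check: compare false positive
# rates of an AFC model across customer segments on held-out data.
# Segment labels, data and the tolerance are illustrative assumptions.
import numpy as np

def false_positive_rate(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    negatives = y_true == 0
    return float((y_pred[negatives] == 1).mean()) if negatives.any() else 0.0

def fairness_check(y_true, y_pred, segments, tolerance=0.02):
    """Flag segments whose FPR deviates from the overall rate by > tolerance."""
    overall = false_positive_rate(y_true, y_pred)
    report = {}
    for seg in np.unique(segments):
        mask = segments == seg
        fpr = false_positive_rate(y_true[mask], y_pred[mask])
        report[seg] = (fpr, abs(fpr - overall) > tolerance)
    return overall, report

# Example with synthetic hold-out labels and predictions.
y_true = np.array([0, 0, 1, 0, 0, 1, 0, 0])
y_pred = np.array([0, 1, 1, 0, 1, 1, 1, 0])
segments = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(fairness_check(y_true, y_pred, segments))
```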
Risk and compliance experts must be trained in AI, or an AI expert should be brought onto the team, to ensure that AI development and deployment happen within appropriate guardrails. They must also develop compliance frameworks specific to AI, establishing a pathway to regulatory adherence in what is still an emerging area for compliance experts.
As part of AI adoption, it’s important that all parts of the organization are briefed not only on the capabilities of the new AI models they’re working with, but also on their shortcomings (such as potential bias), so that staff are more alert to potential errors.
Your organization must also make other strategic considerations to preserve security and data quality. It’s essential to invest in high-quality, secure data infrastructure and to ensure that models are trained on accurate and diverse datasets.
AI is, and will continue to be, both a threat and a defensive tool for banks. But banks need to handle this powerful new technology correctly to avoid creating problems rather than solving them.