Finance has become a top target for deepfake-enabled fraud in the KYC process, undermining the identity-verification frameworks that underpin anti-money laundering (AML) and counter-terrorism financing (CTF) controls.
Experts have reported a rise in suspicious activity involving AI-generated media, warning that threat actors exploit GenAI to “defraud… financial institutions and their customers.”
Wall Street regulator FINRA has warned that deepfake audio and video scams could cost the financial sector up to $40 billion by 2027.
Biometric safeguards alone are no longer enough. A 2024 Regula study found that 49% of businesses across industries such as fintech and banking have faced deepfake fraud attacks, with average losses of $450,000 per incident.
As these numbers rise, it becomes important to understand how deepfake attacks can be prevented to protect customers and the global financial industry.
More than 1,100 deepfake attacks in Indonesia
Last year, an Indonesian bank recorded over 1,100 attempts to bypass its digital KYC loan-application process within three months, cybersecurity firm Group-IB reports.
Threat actors combined AI-powered face-swapping with virtual-camera tools to defeat the bank’s liveness-detection controls, despite the bank’s “robust, multi-layered security measures.” According to Forbes, the estimated losses “from these intrusions have been estimated at $138.5 million in Indonesia alone.”
The AI-driven face-swapping tools let attackers replace a target’s facial features with those of another person and exploit “virtual camera software to manipulate biometric data, deceiving institutions into approving fraudulent transactions,” Group-IB reports.
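One coarse defensive signal on the bank’s side is checking whether the video feed comes from a known virtual-camera driver rather than physical hardware. The sketch below is a minimal illustration, not any vendor’s actual control: it is Linux-specific, the signature list is an assumption, and a renamed driver would evade it.

```python
"""Illustrative heuristic: flag known virtual-camera drivers on Linux.

A minimal sketch only; the signature list is an assumption and renamed
drivers will not be caught. Real injection-attack detection goes far deeper.
"""
from pathlib import Path

# Driver names commonly associated with virtual cameras (assumed, not exhaustive).
VIRTUAL_CAMERA_SIGNATURES = ("obs virtual camera", "v4l2loopback", "droidcam", "manycam")


def find_suspect_cameras() -> list[str]:
    """Return names of video devices whose driver name matches a known signature."""
    suspects = []
    # On Linux, each V4L2 video device exposes its driver name under sysfs.
    for name_file in Path("/sys/class/video4linux").glob("video*/name"):
        device_name = name_file.read_text().strip()
        if any(sig in device_name.lower() for sig in VIRTUAL_CAMERA_SIGNATURES):
            suspects.append(device_name)
    return suspects


if __name__ == "__main__":
    for name in find_suspect_cameras():
        print(f"Possible virtual camera detected: {name}")
```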
How deepfake KYC fraud works
Scammers first gather personal data via malware, the dark web, social networking sites, or phishing scams. This data is then used to impersonate real identities.
After data acquisition, scammers use deepfake technology to alter identity documents: swapping photos, modifying details, or re-creating entire documents to bypass KYC checks.
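Crude edits to document numbers or dates can sometimes be caught by consistency checks alone. As one illustration (not a control attributed to any bank in this story), the check digits in a passport’s machine-readable zone (MRZ) follow the public ICAO 9303 algorithm, so a modified field that no longer matches its check digit is an immediate red flag:

```python
def mrz_check_digit(field: str) -> int:
    """Compute the ICAO 9303 check digit for an MRZ field (weights 7, 3, 1)."""
    weights = (7, 3, 1)
    total = 0
    for i, ch in enumerate(field):
        if ch.isdigit():
            value = int(ch)          # digits keep their face value
        elif ch.isalpha():
            value = ord(ch.upper()) - ord("A") + 10  # A=10 ... Z=35
        elif ch == "<":
            value = 0                # filler character counts as zero
        else:
            raise ValueError(f"invalid MRZ character: {ch!r}")
        total += value * weights[i % 3]
    return total % 10


if __name__ == "__main__":
    # ICAO 9303 sample document number "L898902C3" carries check digit 6.
    assert mrz_check_digit("L898902C3") == 6
```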
Finally, threat actors feed prerecorded deepfake videos through virtual cameras, simulating real-time interaction to slip past verification checks.
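Because a prerecorded clip cannot react to a prompt it has never seen, one widely used countermeasure is a randomized challenge-response liveness check. The sketch below is a hedged illustration: the challenge list, the five-second window, and the `gesture_detected` hook are hypothetical placeholders rather than a real vendor API.

```python
"""Minimal sketch of a randomized challenge-response liveness check.

The challenge list, response window, and `gesture_detected` hook are
hypothetical placeholders; production systems add many more signals.
"""
import secrets
import time

CHALLENGES = ("blink twice", "turn head left", "turn head right", "smile")
RESPONSE_WINDOW_SECONDS = 5.0


def run_liveness_check(gesture_detected) -> bool:
    """Issue a random gesture challenge and require a timely on-camera response.

    A prerecorded deepfake video cannot anticipate the challenge, so replayed
    footage fails unless the attacker can re-render frames in real time.
    """
    challenge = secrets.choice(CHALLENGES)
    print(f"Challenge issued: {challenge}")
    deadline = time.monotonic() + RESPONSE_WINDOW_SECONDS
    while time.monotonic() < deadline:
        if gesture_detected(challenge):  # hypothetical computer-vision hook
            return True
        time.sleep(0.1)
    return False
```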
This highlights that traditional verification mechanisms are proving inadequate against advanced AI scams: one study recorded a deepfake attempt every five minutes and found that only 0.1% of people could reliably spot deepfakes.