
How AI Impacts KYC and Financial Security

Finance has become a top target for deepfake-enabled fraud in the KYC process, undermining the identity-verification frameworks that underpin counter-terrorism financing (CTF) and anti-money laundering (AML) controls.

Experts have observed a rise in suspicious activity involving AI-generated media, noting that threat actors exploit GenAI to “defraud… financial institutions and their customers.”

Wall Street regulator FINRA has warned that deepfake audio and video scams could cause as much as $40 billion in losses in the financial sector by 2027.

Biometric safeguards alone no longer suffice. A 2024 Regula study revealed that 49% of businesses across industries such as fintech and banking have faced fraud attacks using deepfakes, with average losses of $450,000 per incident.

As these numbers rise, it becomes critical to understand how deepfake-enabled fraud can be prevented to protect customers and the financial industry globally.

More than 1,100 deepfake attacks in Indonesia

Last year, an Indonesian bank recorded more than 1,100 attempts to bypass the digital KYC checks in its loan-application process within three months, according to cybersecurity firm Group-IB.

Threat actors combined AI-powered face-swapping with virtual-camera tools to defeat the bank’s liveness-detection controls, despite the bank’s “robust, multi-layered security measures.” According to Forbes, losses “from these intrusions have been estimated at $138.5 million in Indonesia alone.”

The AI-driven face-swapping tools let attackers replace a target’s facial features with those of another person and then exploit “virtual camera software to manipulate biometric data, deceiving institutions into approving fraudulent transactions,” Group-IB reports.

How does deepfake KYC fraud work?

Scammers first gather personal data via malware, the dark web, social networking sites, or phishing scams. The data is then used to impersonate real identities.

After data acquisition, scammers use deepfake technology to alter identity documents, swapping photos, modifying details, or re-creating entire IDs to evade KYC checks.

Threat actors then feed prerecorded deepfake videos through virtual cameras, simulating real-time interactions to slip past liveness and other security checks.

This highlights that traditional mechanisms are proving inadequate against advanced AI scams. One study found that a deepfake attempt was made every five minutes, and that only 0.1% of people could reliably spot deepfakes.
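As an illustration of why the virtual-camera step matters, here is a minimal, hypothetical pre-check of the kind a KYC onboarding flow might run before starting a liveness session, written in Python. It simply flags camera device names commonly associated with virtual-camera software; the keyword list and function names are assumptions for illustration, and real products rely on far stronger signals (injection detection, challenge-response liveness, device attestation).

```python
# Illustrative sketch only: flag camera devices whose names suggest
# virtual-camera software before a KYC liveness check. The keyword list
# and names are assumptions, not any vendor's implementation.

SUSPICIOUS_CAMERA_KEYWORDS = (
    "virtual", "obs", "manycam", "splitcam", "droidcam",
)

def flag_suspicious_cameras(device_names: list[str]) -> list[str]:
    """Return the camera device names that look like virtual cameras."""
    flagged = []
    for name in device_names:
        lowered = name.lower()
        if any(keyword in lowered for keyword in SUSPICIOUS_CAMERA_KEYWORDS):
            flagged.append(name)
    return flagged

if __name__ == "__main__":
    # Hypothetical device list reported by a client-side SDK.
    devices = ["Integrated Webcam", "OBS Virtual Camera"]
    flagged = flag_suspicious_cameras(devices)
    if flagged:
        print("Virtual camera detected, escalate to manual review:", flagged)
    else:
        print("No obvious virtual camera found; continue liveness checks.")
```

A check like this is easy to evade on its own, which is why it would only ever be one signal among many in a layered onboarding defence.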

U.S. Senators Propose New Task Force to Tackle AI-Based Financial Scams

 


In response to the rising threat of artificial intelligence being used for financial fraud, U.S. lawmakers have introduced a new bipartisan Senate bill aimed at curbing deepfake-related scams.

The bill, called the Preventing Deep Fake Scams Act, has been brought forward by Senators from both political parties. If passed, it would lead to the formation of a new task force headed by the U.S. Department of the Treasury. This group would bring together leaders from major financial oversight bodies to study how AI is being misused in scams, identity theft, and data-related crimes and what can be done about it.

The proposed task force would include representatives from agencies such as the Federal Reserve, the Consumer Financial Protection Bureau, and the Federal Deposit Insurance Corporation, among others. Their goal will be to closely examine the growing use of AI in fraudulent activities and provide the U.S. Congress with a detailed report within a year.


This report is expected to outline:

• How financial institutions can better use AI to stop fraud before it happens,

• Ways to protect consumers from being misled by deepfake content, and

• Policy and regulatory recommendations for addressing this evolving threat.


One of the key concerns the bill addresses is the use of AI to create fake voices and videos that mimic real people. These deepfakes are often used to deceive victims—such as by pretending to be a friend or family member in distress—into sending money or sharing sensitive information.

According to official data from the Federal Trade Commission, over $12.5 billion was stolen through fraud in the past year—a 25% increase from the previous year. Many of these scams now involve AI-generated messages and voices designed to appear highly convincing.

While this particular legislation focuses on financial scams, it adds to a broader legislative effort to regulate the misuse of deepfake technology. Earlier this year, the U.S. House passed a bill targeting nonconsensual deepfake pornography. Meanwhile, law enforcement agencies have warned that fake messages impersonating high-ranking officials are being used in various schemes targeting both current and former government personnel.

Another Senate bill, introduced recently, seeks to launch a national awareness program led by the Commerce Department. This initiative aims to educate the public on how to recognize AI-generated deception and avoid becoming victims of such scams.

As digital fraud evolves, lawmakers are urging financial institutions, regulators, and the public to work together in identifying threats and developing solutions that can keep pace with rapidly advancing technologies.

AI Can Create Deepfake Videos of Children Using Just 20 Images, Expert Warns

 

Parents are being urged to rethink how much they share about their children online, as experts warn that criminals can now generate realistic deepfake videos using as few as 20 images. This alarming development highlights the growing risks of digital identity theft and fraud facing children due to oversharing on social media platforms.  

According to Professor Carsten Maple of the University of Warwick and the Alan Turing Institute, modern AI tools can construct highly realistic digital profiles, including 30-second deepfake videos, from a small number of publicly available photos. These images can be used not only by criminal networks to commit identity theft, open fraudulent accounts, or claim government benefits in a child’s name but also by large tech companies to train their algorithms, often without the user’s full awareness or consent. 

New research conducted by Perspectus Global and commissioned by Proton surveyed 2,000 UK parents of children under 16. The findings show that on average, parents upload 63 images to social media every month, with 59% of those being family-related. A significant proportion of parents—21%—share these photos multiple times a week, while 38% post several times a month. These frequent posts not only showcase images but also often contain sensitive data like location tags and key life events, making it easier for bad actors to build a detailed online profile of the child. Professor Maple warned that such oversharing can lead to long-term consequences. 

Aside from potential identity theft, children could face mental distress or reputational harm later in life from having a permanent digital footprint that they never consented to create. The problem is exacerbated by the fact that many parents are unaware of how their data is being used. For instance, 48% of survey respondents did not realize that cloud storage providers can access the data stored on their platforms. In fact, more than half of the surveyed parents (56%) store family images on cloud services such as Google Drive or Apple iCloud. On average, each parent had 185 photos of their children stored digitally—images that may be accessed or analyzed under vaguely worded terms and conditions.  

Recent changes to Instagram’s user agreement, which now allows the platform to use uploaded images to train its AI systems, have further heightened privacy concerns. Additionally, experts have warned about the use of personal images by other Big Tech firms to enhance facial recognition algorithms and advertising models. To protect their children, parents are advised to implement a range of safety measures. These include using secure and private cloud storage, adjusting privacy settings on social platforms, avoiding public Wi-Fi when sharing or uploading data, and staying vigilant against phishing scams. 

Furthermore, experts recommend setting boundaries with children regarding online activity, using parental controls, antivirus tools, and search filters, and modeling responsible digital behavior. The growing accessibility of AI-based image manipulation tools underscores the urgent need for greater awareness and proactive digital hygiene. What may seem like harmless sharing today could expose children to significant risks in the future.

Cybercriminals Employ Fake AI tools to Propagate the Infostealer Noodlophile

 

A new information-stealing malware family, dubbed 'Noodlophile,' is being spread through fake AI-powered video-generation tools, with the malware delivered to victims disguised as the promised media content.

The websites are promoted in high-visibility Facebook groups and use catchy names like the "Dream Machine" to pass themselves off as sophisticated artificial intelligence tools that generate videos from uploaded user files. The campaign documented by Morphisec adds a new infostealer to the mix, even though luring victims with fake AI tools is not a new tactic and has long been used by experienced threat actors.

Morphisec assesses that Noodlophile is a new malware-as-a-service operation associated with Vietnamese-speaking operators, as it is being offered for sale on dark web forums, often in conjunction with "Get Cookie + Pass" services.

Once victims visit the malicious website and upload their files, they receive a ZIP archive that supposedly contains the AI-generated video. Instead, the ZIP holds a deceptively named executable (Video Dream MachineAI.mp4.exe) as well as a hidden folder containing the files required for the later stages. If Windows is set to hide file extensions (the default, which is best changed), the file appears to be an MP4 video.

"The file Video Dream MachineAI.mp4.exe is a 32-bit C++ application signed using a certificate created via Winauth," notes Morphisec."Despite its misleading name (suggesting an .mp4 video), this binary is actually a repurposed version of CapCut, a legitimate video editing tool (version 445.0). This deceptive naming and certificate help it evade user suspicion and some security solutions.”

Double-clicking the fake MP4 launches a chain of executables that culminates in a batch script (Document.docx/install.bat). The script uses the legitimate Windows program 'certutil.exe' to decode and extract a base64-encoded, password-protected RAR archive masquerading as a PDF document. At the same time, it creates a new registry key for persistence.

Subsequently, the script runs 'srchost.exe,' which executes an obfuscated Python script (randomuser2025.txt) retrieved from a hardcoded remote server address, ultimately executing the Noodlophile Stealer in memory. If Avast is found on the infected system, PE hollowing is employed to inject the payload into RegAsm.exe. Shellcode injection is used for in-memory execution.

The best defence against such malware is to avoid downloading and running files from unfamiliar websites. Always check file extensions before opening a file, and run an antivirus scan on any download before executing it.
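To illustrate the extension advice above, the hedged Python sketch below flags filenames that hide an executable extension behind a media-looking one, as in the Video Dream MachineAI.mp4.exe lure described by Morphisec. The helper name and extension lists are assumptions for the example, not any vendor's tooling.

```python
# Illustrative sketch only: flag files whose names hide an executable
# extension behind a media-style one (e.g. "something.mp4.exe").
# Extension lists are assumptions; this is no substitute for AV scanning.

from pathlib import Path

EXECUTABLE_EXTENSIONS = {".exe", ".scr", ".bat", ".cmd", ".com", ".js", ".vbs"}
MEDIA_EXTENSIONS = {".mp4", ".avi", ".mov", ".mkv", ".jpg", ".png", ".pdf"}

def looks_like_disguised_executable(filename: str) -> bool:
    """True if the file is executable but carries a media-style inner extension."""
    suffixes = [s.lower() for s in Path(filename).suffixes]
    if len(suffixes) < 2:
        return False
    return suffixes[-1] in EXECUTABLE_EXTENSIONS and suffixes[-2] in MEDIA_EXTENSIONS

if __name__ == "__main__":
    for name in ["Video Dream MachineAI.mp4.exe", "holiday.mp4"]:
        verdict = "suspicious" if looks_like_disguised_executable(name) else "looks ok"
        print(f"{name}: {verdict}")
```

A check like this only catches the simplest disguises, but it captures the habit the advice is pointing at: look at the real extension, not the icon or the apparent file type.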

SBI Issues Urgent Warning Against Deepfake Scam Videos Promoting Fake Investment Schemes

 

The State Bank of India (SBI) has issued an urgent public advisory warning customers and the general public about the rising threat of deepfake scam videos. These videos, circulating widely on social media, falsely claim that SBI has launched an AI-powered investment scheme in collaboration with the Government of India and multinational corporations—offering unusually high returns. 

SBI categorically denied any association with such platforms and urged individuals to verify investment-related information through its official website, social media handles, or local branches. The bank emphasized that it does not endorse or support any investment services that promise guaranteed or unrealistic returns. 

In a statement published on its official X (formerly Twitter) account, SBI stated: “State Bank of India cautions all its customers and the general public about many deepfake videos being circulated on social media, falsely claiming the launch of an AI-based platform showcasing lucrative investment schemes supported by SBI in association with the Government of India and some multinational companies. These videos misuse technology to create false narratives and deceive people into making financial commitments in fraudulent schemes. We clarify that SBI does not endorse any such schemes that promise unrealistic or unusually high returns.” 

Deepfake technology, which uses AI to fabricate convincing videos by manipulating facial expressions and voices, has increasingly been used to impersonate public figures and create fake endorsements. These videos often feature what appear to be real speeches or statements by senior officials or celebrities, misleading viewers into believing in illegitimate financial products. This isn’t an isolated incident. Earlier this year, a deepfake video showing India’s Union Finance Minister allegedly promoting an investment platform was debunked by the government’s PIB Fact Check. 

The video, which falsely claimed that viewers could earn a steady daily income through the platform, was confirmed to be digitally altered. SBI’s warning is part of a broader effort to combat the misuse of emerging technologies for financial fraud. The bank is urging everyone to remain cautious and to avoid falling prey to such digital deceptions. Scammers are increasingly using advanced AI tools to exploit public trust and create a false sense of legitimacy. 

To protect themselves, customers are advised to verify any financial information or offers through official SBI channels. Any suspicious activity or misleading promotional material should be reported immediately. SBI’s proactive communication reinforces its commitment to safeguarding customers in an era where financial scams are becoming more sophisticated. The bank’s message is clear: do not trust any claims about investment opportunities unless they come directly from verified, official sources.

Fake Candidates, Real Threat: Deepfake Job Applicants Are the New Cybersecurity Challenge

 

When voice authentication firm Pindrop Security advertised an opening for a senior engineering role, one resume caught their attention. The candidate, a Russian developer named Ivan, appeared to be a perfect fit on paper. But during the video interview, something felt off—his facial expressions didn’t quite match his speech. It turned out Ivan wasn’t who he claimed to be.

According to Vijay Balasubramaniyan, CEO and co-founder of Pindrop, Ivan was a fraudster using deepfake software and other generative AI tools in an attempt to secure a job through deception.

“Gen AI has blurred the line between what it is to be human and what it means to be machine,” Balasubramaniyan said. “What we’re seeing is that individuals are using these fake identities and fake faces and fake voices to secure employment, even sometimes going so far as doing a face swap with another individual who shows up for the job.”

While businesses have always had to protect themselves against hackers targeting vulnerabilities, a new kind of threat has emerged: job applicants powered by AI who fake their identities to gain employment. From forged resumes and AI-generated IDs to scripted interview responses, these candidates are part of a fast-growing trend that cybersecurity experts warn is here to stay.

In fact, a Gartner report predicts that by 2028, 1 in 4 job seekers globally will be using some form of AI-generated deception.

The implications for employers are serious. Fraudulent hires can introduce malware, exfiltrate confidential data, or simply draw salaries under false pretenses.

A Growing Cybercrime Strategy

This problem is especially acute in cybersecurity and crypto startups, where remote hiring makes it easier for scammers to operate undetected. Ben Sesser, CEO of BrightHire, noted a massive uptick in these incidents over the past year.

“Humans are generally the weak link in cybersecurity, and the hiring process is an inherently human process with a lot of hand-offs and a lot of different people involved,” Sesser said. “It’s become a weak point that folks are trying to expose.”

This isn’t a problem confined to startups. Earlier this year, the U.S. Department of Justice disclosed that over 300 American companies had unknowingly hired IT workers tied to North Korea. The impersonators used stolen identities, operated via remote networks, and allegedly funneled salaries back to fund the country’s weapons program.

Criminal Networks & AI-Enhanced Resumes

Lili Infante, founder and CEO of Florida-based CAT Labs, says her firm regularly receives applications from suspected North Korean agents.

“Every time we list a job posting, we get 100 North Korean spies applying to it,” Infante said. “When you look at their resumes, they look amazing; they use all the keywords for what we’re looking for.”

To filter out such applicants, CAT Labs relies on ID verification companies like iDenfy, Jumio, and Socure, which specialize in detecting deepfakes and verifying authenticity.

The issue has expanded far beyond North Korea. Experts like Roger Grimes, a longtime computer security consultant, report similar patterns with fake candidates originating from Russia, China, Malaysia, and South Korea.

Ironically, some of these impersonators end up excelling in their roles.

“Sometimes they’ll do the role poorly, and then sometimes they perform it so well that I’ve actually had a few people tell me they were sorry they had to let them go,” Grimes said.

Even KnowBe4, the cybersecurity firm Grimes works with, accidentally hired a deepfake engineer from North Korea who used AI to modify a stock photo and passed through multiple background checks. The deception was uncovered only after suspicious network activity was flagged.

What Lies Ahead

Despite a few high-profile incidents, most hiring teams still aren’t fully aware of the risks posed by deepfake job applicants.

“They’re responsible for talent strategy and other important things, but being on the front lines of security has historically not been one of them,” said BrightHire’s Sesser. “Folks think they’re not experiencing it, but I think it’s probably more likely that they’re just not realizing that it’s going on.”

As deepfake tools become increasingly realistic, experts believe the problem will grow harder to detect. Fortunately, companies like Pindrop are already developing video authentication systems to fight back. It was one such system that ultimately exposed “Ivan X.”

Although Ivan claimed to be in western Ukraine, his IP address revealed he was operating from a Russian military base near North Korea, according to the company.

Pindrop, backed by Andreessen Horowitz and Citi Ventures, originally focused on detecting voice-based fraud. Today, it may be pivoting toward defending video and digital hiring interactions.

“We are no longer able to trust our eyes and ears,” Balasubramaniyan said. “Without technology, you’re worse off than a monkey with a random coin toss.”

AI Impersonations: Revealing the New Frontier of Scamming

 


In the age of rapidly evolving artificial intelligence (AI), a new breed of fraud has emerged, posing enormous risks to companies and their clients. AI-powered impersonations, capable of generating highly realistic voice and visual content, have become a major threat that CISOs must address.

This article explores the multifaceted risks of AI-generated impersonations, including their financial and security impacts. It also provides insights into risk mitigation and a look ahead at combating AI-driven scams.

AI-generated impersonations have ushered in a new era of scam threats. Fraudsters now use AI to create convincingly realistic audio and visual content, such as cloned voices and deepfake video. These enhanced impersonations make it harder for targets to distinguish genuine from fraudulent content, leaving them vulnerable to various types of fraud.

The rise of AI-generated impersonations has significantly escalated risks for companies and clients in several ways:

  • Enhanced realism: AI tools generate highly realistic audio and visuals, making it difficult to differentiate between authentic and fraudulent content. This increased realism boosts the success rate of scams.
  • Scalability and accessibility: AI-powered impersonation techniques can be automated and scaled, allowing fraudsters to target multiple individuals quickly, expanding their reach and impact.
  • Deepfake threats: AI-driven deepfake technology lets scammers create misleading images or videos, which can destroy reputations, spread fake news, or manipulate video evidence.
  • Voice cloning: AI-enabled voice cloning allows fraudsters to replicate a person’s voice and speech patterns, enabling phone-based impersonations and fraudulent actions by impersonating trusted figures.

Prevention tips: As AI technology evolves, so do the risks of AI-generated impersonations. Organizations need a multifaceted approach to mitigate these threats. Using sophisticated detection systems powered by AI can help identify impersonations, while rigorous employee training and awareness initiatives are essential. CISOs, AI researchers, and industry professionals must collaborate to build proactive defenses against these scams.
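As one concrete, hypothetical example of the layered controls described above, the Python sketch below encodes a simple policy: payment requests that arrive over voice or video channels, or that change payee details, must be confirmed out-of-band on a pre-registered channel before processing. The class, channel names, and threshold are assumptions for illustration, not a prescribed standard.

```python
# Illustrative sketch only: a simple escalation rule against voice-clone
# and deepfake payment fraud. All names and thresholds are assumptions.

from dataclasses import dataclass

@dataclass
class PaymentRequest:
    channel: str          # e.g. "voice_call", "video_call", "email", "portal"
    amount: float
    payee_changed: bool   # True if bank details differ from the payee on file

HIGH_RISK_CHANNELS = {"voice_call", "video_call"}
AMOUNT_THRESHOLD = 10_000.0  # assumed threshold for illustration

def requires_out_of_band_verification(req: PaymentRequest) -> bool:
    """Decide whether the request must be confirmed on a pre-registered channel."""
    if req.payee_changed:
        return True
    return req.channel in HIGH_RISK_CHANNELS and req.amount >= AMOUNT_THRESHOLD

if __name__ == "__main__":
    req = PaymentRequest(channel="voice_call", amount=25_000.0, payee_changed=False)
    print("Out-of-band verification required:", requires_out_of_band_verification(req))
```

The point of a rule like this is not sophistication but friction: a cloned voice cannot complete a callback to a number the organization already holds on file.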

UIUC Researchers Expose Security Risks in OpenAI's Voice-Enabled ChatGPT-4o API, Revealing Potential for Financial Scams

 

Researchers recently revealed that OpenAI’s ChatGPT-4o voice API could be exploited by cybercriminals for financial scams, showing some success despite moderate limitations. This discovery has raised concerns about the misuse potential of this advanced language model.

ChatGPT-4o, OpenAI’s latest AI model, offers new capabilities, combining text, voice, and vision processing. These updates are supported by security features aimed at detecting and blocking malicious activity, including unauthorized voice replication.

Voice-based scams have become a significant threat, further exacerbated by deepfake technology and advanced text-to-speech tools. Despite OpenAI’s security measures, researchers from the University of Illinois Urbana-Champaign (UIUC) demonstrated how these protections could still be circumvented, highlighting risks of abuse by cybercriminals.

Researchers Richard Fang, Dylan Bowman, and Daniel Kang emphasized that current AI tools may lack sufficient restrictions to prevent misuse. They pointed out the risk of large-scale scams using automated voice generation, which reduces the need for human effort and keeps operational costs low.

Their study examined a variety of scams, including unauthorized bank transfers, gift card fraud, cryptocurrency theft, and social media credential theft. Using ChatGPT-4o’s voice capabilities, the researchers automated key actions like navigation, data input, two-factor authentication, and following specific scam instructions.

To bypass ChatGPT-4o’s data protection filters, the team used prompt “jailbreaking” techniques, allowing the AI to handle sensitive information. They simulated interactions with ChatGPT-4o by acting as gullible victims, testing the feasibility of different scams on real websites.

By manually verifying each transaction, such as those on Bank of America’s site, they found varying success rates. For example, Gmail credential theft was successful 60% of the time, while crypto-related scams succeeded in about 40% of attempts.

Cost analysis showed that carrying out these scams was relatively inexpensive, with successful cases averaging $0.75. More complex scams, like unauthorized bank transfers, cost around $2.51—still low compared to the potential profits such scams might yield.

OpenAI responded by emphasizing that their upcoming model, o1-preview, includes advanced safeguards to prevent this type of misuse. OpenAI claims that this model significantly outperforms GPT-4o in resisting unsafe content generation and handling adversarial prompts.

OpenAI also highlighted the importance of studies like UIUC’s for enhancing ChatGPT’s defenses. They noted that GPT-4o already restricts voice replication to pre-approved voices and that newer models are undergoing stringent evaluations to increase robustness against malicious use.