
AI Can Create Deepfake Videos of Children Using Just 20 Images, Expert Warns


Parents are being urged to rethink how much they share about their children online, as experts warn that criminals can now generate realistic deepfake videos using as few as 20 images. This alarming development highlights the growing risks of digital identity theft and fraud facing children due to oversharing on social media platforms.  

According to Professor Carsten Maple of the University of Warwick and the Alan Turing Institute, modern AI tools can construct highly realistic digital profiles, including 30-second deepfake videos, from a small number of publicly available photos. These images can be used not only by criminal networks to commit identity theft, open fraudulent accounts, or claim government benefits in a child’s name but also by large tech companies to train their algorithms, often without the user’s full awareness or consent. 

New research conducted by Perspectus Global and commissioned by Proton surveyed 2,000 UK parents of children under 16. The findings show that, on average, parents upload 63 images to social media every month, 59% of them family-related. A significant proportion of parents (21%) share these photos multiple times a week, while 38% post several times a month. These posts often include not just images but also sensitive details such as location tags and key life events, making it easier for bad actors to build a detailed online profile of a child. Professor Maple warned that such oversharing can lead to long-term consequences.

Aside from potential identity theft, children could face mental distress or reputational harm later in life from having a permanent digital footprint that they never consented to create. The problem is exacerbated by the fact that many parents are unaware of how their data is being used. For instance, 48% of survey respondents did not realize that cloud storage providers can access the data stored on their platforms. In fact, more than half of the surveyed parents (56%) store family images on cloud services such as Google Drive or Apple iCloud. On average, each parent had 185 photos of their children stored digitally—images that may be accessed or analyzed under vaguely worded terms and conditions.  

Recent changes to Instagram’s user agreement, which now allows the platform to use uploaded images to train its AI systems, have further heightened privacy concerns. Additionally, experts have warned about the use of personal images by other Big Tech firms to enhance facial recognition algorithms and advertising models. To protect their children, parents are advised to implement a range of safety measures. These include using secure and private cloud storage, adjusting privacy settings on social platforms, avoiding public Wi-Fi when sharing or uploading data, and staying vigilant against phishing scams. 
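One concrete step in the same spirit is stripping location metadata (EXIF GPS tags) from photos before they are shared, since those tags are among the data points that make profile-building easy. The snippet below is a minimal sketch using the Pillow imaging library; the file names are illustrative, and it demonstrates the general technique rather than any tool endorsed by the researchers.

```python
# Minimal sketch (assumption: Pillow is installed; file names are illustrative).
# Re-saves a photo from its pixel data only, so EXIF/GPS tags are not carried over.
from PIL import Image

def strip_metadata(src: str, dst: str) -> None:
    with Image.open(src) as img:
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))  # copies pixels only, not metadata
        clean.save(dst)

strip_metadata("holiday_photo.jpg", "holiday_photo_clean.jpg")
```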

Furthermore, experts recommend setting boundaries with children regarding online activity, using parental controls, antivirus tools, and search filters, and modeling responsible digital behavior. The growing accessibility of AI-based image manipulation tools underscores the urgent need for greater awareness and proactive digital hygiene. What may seem like harmless sharing today could expose children to significant risks in the future.

Cybercriminals Employ Fake AI Tools to Propagate the Infostealer Noodlophile


A new family of information-stealing malware, dubbed 'Noodlophile,' is being spread through fake AI-powered video-generation tools that promise to turn users' uploads into generated media content.

The websites are promoted in high-visibility Facebook groups under catchy names such as "Dream Machine," presenting themselves as sophisticated artificial intelligence tools that create videos from files users upload. Although the idea of using fake AI tools to spread malware is not new and has been used by experienced threat actors, the campaign documented by Morphisec adds a new infostealer to the mix.

Morphisec assesses that Noodlophile is a new malware-as-a-service operation run by Vietnamese-speaking operators: it is offered for sale on dark web forums, often bundled with "Get Cookie + Pass" services.

Once the victim visits the malicious website and submits their files, they receive a ZIP archive that supposedly contains the AI-generated video. Instead, the ZIP holds a deceptively named executable (Video Dream MachineAI.mp4.exe) along with a hidden folder containing the files needed for the later stages. If file extensions are hidden in Windows (the default setting, and one worth changing), the file appears to be an ordinary MP4 video.

"The file Video Dream MachineAI.mp4.exe is a 32-bit C++ application signed using a certificate created via Winauth," notes Morphisec."Despite its misleading name (suggesting an .mp4 video), this binary is actually a repurposed version of CapCut, a legitimate video editing tool (version 445.0). This deceptive naming and certificate help it evade user suspicion and some security solutions.”

Double-clicking on the fraudulent MP4 will open a sequence of executables, culminating in the launch of a batch script (Document.docx/install.bat). The script uses the genuine Windows program 'certutil.exe' to decode and extract a base64-encoded password-protected RAR package masquerading as a PDF document. At the same time, it creates a new registry key for persistence.
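Because the chain described above writes a registry Run key for persistence, auditing autostart entries is one simple check defenders can run. The sketch below is a minimal, Windows-only illustration using Python's built-in winreg module; the "user-writable path" heuristic is an assumption for demonstration, not an indicator taken from Morphisec's report.

```python
# Minimal sketch (assumptions: Windows host, Python 3; the suspicious-path
# heuristic below is illustrative, not an indicator from the Noodlophile report).
import winreg

RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"

def list_run_entries():
    """Enumerate per-user autostart entries from the HKCU Run key."""
    entries = []
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUN_KEY) as key:
        index = 0
        while True:
            try:
                name, value, _type = winreg.EnumValue(key, index)
            except OSError:
                break  # no more values
            entries.append((name, value))
            index += 1
    return entries

if __name__ == "__main__":
    for name, command in list_run_entries():
        # Autostart commands pointing at user-writable or temp paths are a
        # common trait of commodity stealers' persistence and worth reviewing.
        suspicious = any(s in command.lower() for s in ("\\appdata\\", "\\temp\\"))
        print(("SUSPICIOUS " if suspicious else "") + f"{name}: {command}")
```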

Subsequently, the script runs 'srchost.exe,' which executes an obfuscated Python script (randomuser2025.txt) retrieved from a hardcoded remote server address, ultimately executing the Noodlophile Stealer in memory. If Avast is found on the infected system, PE hollowing is employed to inject the payload into RegAsm.exe; otherwise, shellcode injection is used for in-memory execution.

The best defence against this kind of malware is to avoid downloading and running files from unfamiliar websites. Always check file extensions before opening a file, and scan downloads with an antivirus tool before running them.
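As a practical complement to that advice, the short sketch below scans a folder for files whose names carry a media-style double extension such as .mp4.exe, the trick used by the Video Dream MachineAI.mp4.exe lure. The folder path and extension lists are illustrative assumptions.

```python
# Minimal sketch (assumptions: the Downloads folder and the extension lists
# below are illustrative; adjust them to your own environment).
from pathlib import Path

DECOY_EXTS = {".mp4", ".pdf", ".docx", ".jpg", ".png"}
EXEC_EXTS = {".exe", ".scr", ".bat", ".cmd", ".com"}

def find_double_extensions(folder: Path):
    """Return files that look like media but are actually executable."""
    hits = []
    for path in folder.rglob("*"):
        if not path.is_file():
            continue
        suffixes = [s.lower() for s in path.suffixes]
        # e.g. "Video Dream MachineAI.mp4.exe" -> [".mp4", ".exe"]
        if len(suffixes) >= 2 and suffixes[-1] in EXEC_EXTS and suffixes[-2] in DECOY_EXTS:
            hits.append(path)
    return hits

if __name__ == "__main__":
    for hit in find_double_extensions(Path.home() / "Downloads"):
        print(f"Possible disguised executable: {hit}")
```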

SBI Issues Urgent Warning Against Deepfake Scam Videos Promoting Fake Investment Schemes


The State Bank of India (SBI) has issued an urgent public advisory warning customers and the general public about the rising threat of deepfake scam videos. These videos, circulating widely on social media, falsely claim that SBI has launched an AI-powered investment scheme in collaboration with the Government of India and multinational corporations—offering unusually high returns. 

SBI categorically denied any association with such platforms and urged individuals to verify investment-related information through its official website, social media handles, or local branches. The bank emphasized that it does not endorse or support any investment services that promise guaranteed or unrealistic returns. 

In a statement published on its official X (formerly Twitter) account, SBI stated: “State Bank of India cautions all its customers and the general public about many deepfake videos being circulated on social media, falsely claiming the launch of an AI-based platform showcasing lucrative investment schemes supported by SBI in association with the Government of India and some multinational companies. These videos misuse technology to create false narratives and deceive people into making financial commitments in fraudulent schemes. We clarify that SBI does not endorse any such schemes that promise unrealistic or unusually high returns.” 

Deepfake technology, which uses AI to fabricate convincing videos by manipulating facial expressions and voices, has increasingly been used to impersonate public figures and create fake endorsements. These videos often feature what appear to be real speeches or statements by senior officials or celebrities, misleading viewers into believing in illegitimate financial products. This isn’t an isolated incident. Earlier this year, a deepfake video showing India’s Union Finance Minister allegedly promoting an investment platform was debunked by the government’s PIB Fact Check. 

The video, which falsely claimed that viewers could earn a steady daily income through the platform, was confirmed to be digitally altered. SBI’s warning is part of a broader effort to combat the misuse of emerging technologies for financial fraud. The bank is urging everyone to remain cautious and to avoid falling prey to such digital deceptions. Scammers are increasingly using advanced AI tools to exploit public trust and create a false sense of legitimacy. 

To protect themselves, customers are advised to verify any financial information or offers through official SBI channels. Any suspicious activity or misleading promotional material should be reported immediately. SBI’s proactive communication reinforces its commitment to safeguarding customers in an era where financial scams are becoming more sophisticated. The bank’s message is clear: do not trust any claims about investment opportunities unless they come directly from verified, official sources.

Fake Candidates, Real Threat: Deepfake Job Applicants Are the New Cybersecurity Challenge


When voice authentication firm Pindrop Security advertised an opening for a senior engineering role, one resume caught their attention. The candidate, a Russian developer named Ivan, appeared to be a perfect fit on paper. But during the video interview, something felt off—his facial expressions didn’t quite match his speech. It turned out Ivan wasn’t who he claimed to be.

According to Vijay Balasubramaniyan, CEO and co-founder of Pindrop, Ivan was a fraudster using deepfake software and other generative AI tools in an attempt to secure a job through deception.

“Gen AI has blurred the line between what it is to be human and what it means to be machine,” Balasubramaniyan said. “What we’re seeing is that individuals are using these fake identities and fake faces and fake voices to secure employment, even sometimes going so far as doing a face swap with another individual who shows up for the job.”

While businesses have always had to protect themselves against hackers targeting vulnerabilities, a new kind of threat has emerged: job applicants powered by AI who fake their identities to gain employment. From forged resumes and AI-generated IDs to scripted interview responses, these candidates are part of a fast-growing trend that cybersecurity experts warn is here to stay.

In fact, a Gartner report predicts that by 2028, 1 in 4 job seekers globally will be using some form of AI-generated deception.

The implications for employers are serious. Fraudulent hires can introduce malware, exfiltrate confidential data, or simply draw salaries under false pretenses.

A Growing Cybercrime Strategy

This problem is especially acute in cybersecurity and crypto startups, where remote hiring makes it easier for scammers to operate undetected. Ben Sesser, CEO of BrightHire, noted a massive uptick in these incidents over the past year.

“Humans are generally the weak link in cybersecurity, and the hiring process is an inherently human process with a lot of hand-offs and a lot of different people involved,” Sesser said. “It’s become a weak point that folks are trying to expose.”

This isn’t a problem confined to startups. Earlier this year, the U.S. Department of Justice disclosed that over 300 American companies had unknowingly hired IT workers tied to North Korea. The impersonators used stolen identities, operated via remote networks, and allegedly funneled salaries back to fund the country’s weapons program.

Criminal Networks & AI-Enhanced Resumes

Lili Infante, founder and CEO of Florida-based CAT Labs, says her firm regularly receives applications from suspected North Korean agents.

“Every time we list a job posting, we get 100 North Korean spies applying to it,” Infante said. “When you look at their resumes, they look amazing; they use all the keywords for what we’re looking for.”

To filter out such applicants, CAT Labs relies on ID verification companies like iDenfy, Jumio, and Socure, which specialize in detecting deepfakes and verifying authenticity.

The issue has expanded far beyond North Korea. Experts like Roger Grimes, a longtime computer security consultant, report similar patterns with fake candidates originating from Russia, China, Malaysia, and South Korea.

Ironically, some of these impersonators end up excelling in their roles.

“Sometimes they’ll do the role poorly, and then sometimes they perform it so well that I’ve actually had a few people tell me they were sorry they had to let them go,” Grimes said.

Even KnowBe4, the cybersecurity firm Grimes works with, accidentally hired a deepfake engineer from North Korea who used AI to modify a stock photo and passed through multiple background checks. The deception was uncovered only after suspicious network activity was flagged.

What Lies Ahead

Despite a few high-profile incidents, most hiring teams still aren’t fully aware of the risks posed by deepfake job applicants.

“They’re responsible for talent strategy and other important things, but being on the front lines of security has historically not been one of them,” said BrightHire’s Sesser. “Folks think they’re not experiencing it, but I think it’s probably more likely that they’re just not realizing that it’s going on.”

As deepfake tools become increasingly realistic, experts believe the problem will grow harder to detect. Fortunately, companies like Pindrop are already developing video authentication systems to fight back. It was one such system that ultimately exposed “Ivan X.”

Although Ivan claimed to be in western Ukraine, his IP address revealed he was operating from a Russian military base near North Korea, according to the company.
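Comparing a candidate's claimed location with where their connection actually originates is one of the cheaper checks a hiring team can run. The sketch below is a minimal illustration using the requests library and the free ip-api.com lookup service; the IP address is a placeholder, the check should only be run with consent, and a VPN or proxy can defeat it.

```python
# Minimal sketch (assumptions: the free ip-api.com JSON endpoint is used as
# documented publicly; the candidate IP below is a placeholder TEST-NET address).
import requests

def lookup_ip(ip: str) -> dict:
    resp = requests.get(f"http://ip-api.com/json/{ip}", timeout=10)
    resp.raise_for_status()
    return resp.json()  # includes fields such as "status", "country", "city"

if __name__ == "__main__":
    claimed_country = "Ukraine"
    candidate_ip = "203.0.113.7"  # placeholder address for illustration
    info = lookup_ip(candidate_ip)
    if info.get("status") == "success" and info.get("country") != claimed_country:
        print(f"Mismatch: candidate claims {claimed_country}, "
              f"IP resolves to {info.get('country')} ({info.get('city')})")
    else:
        print("No mismatch detected (or lookup failed); apply further checks.")
```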

Pindrop, backed by Andreessen Horowitz and Citi Ventures, originally focused on detecting voice-based fraud. Today, it may be pivoting toward defending video and digital hiring interactions.

“We are no longer able to trust our eyes and ears,” Balasubramaniyan said. “Without technology, you’re worse off than a monkey with a random coin toss.”

AI Impersonations: Revealing the New Frontier of Scamming

In the age of rapidly evolving artificial intelligence (AI), a new breed of frauds has emerged, posing enormous risks to companies and their clients. AI-powered impersonations, capable of generating highly realistic voice and visual content, have become a major threat that CISOs must address.

This article explores the multifaceted risks of AI-generated impersonations, including their financial and security impacts. It also provides insights into risk mitigation and a look ahead at combating AI-driven scams.

AI-generated impersonations have ushered in a new era of scam threats. Fraudsters now use AI, including voice cloning and deepfake technology, to create convincingly authentic audio and visual content. These enhanced impersonations make it harder for targets to distinguish genuine from fraudulent content, leaving them vulnerable to various types of fraud.

The rise of AI-generated impersonations has significantly escalated risks for companies and clients in several ways:

  • Enhanced realism: AI tools generate highly realistic audio and visuals, making it difficult to differentiate between authentic and fraudulent content. This increased realism boosts the success rate of scams.
  • Scalability and accessibility: AI-powered impersonation techniques can be automated and scaled, allowing fraudsters to target multiple individuals quickly, expanding their reach and impact.
  • Deepfake threats: AI-driven deepfake technology lets scammers create misleading images or videos, which can destroy reputations, spread fake news, or manipulate video evidence.
  • Voice cloning: AI-enabled voice cloning allows fraudsters to replicate a person’s voice and speech patterns, enabling phone-based impersonations and fraudulent actions by impersonating trusted figures.

Prevention tips: As AI technology evolves, so do the risks of AI-generated impersonations. Organizations need a multifaceted approach to mitigate these threats. Using sophisticated detection systems powered by AI can help identify impersonations, while rigorous employee training and awareness initiatives are essential. CISOs, AI researchers, and industry professionals must collaborate to build proactive defenses against these scams.

UIUC Researchers Expose Security Risks in OpenAI's Voice-Enabled ChatGPT-4o API, Revealing Potential for Financial Scams


Researchers recently revealed that OpenAI’s ChatGPT-4o voice API could be exploited by cybercriminals for financial scams, showing some success despite moderate limitations. This discovery has raised concerns about the misuse potential of this advanced language model.

ChatGPT-4o, OpenAI’s latest AI model, offers new capabilities, combining text, voice, and vision processing. These updates are supported by security features aimed at detecting and blocking malicious activity, including unauthorized voice replication.

Voice-based scams have become a significant threat, further exacerbated by deepfake technology and advanced text-to-speech tools. Despite OpenAI’s security measures, researchers from the University of Illinois Urbana-Champaign (UIUC) demonstrated how these protections could still be circumvented, highlighting risks of abuse by cybercriminals.

Researchers Richard Fang, Dylan Bowman, and Daniel Kang emphasized that current AI tools may lack sufficient restrictions to prevent misuse. They pointed out the risk of large-scale scams using automated voice generation, which reduces the need for human effort and keeps operational costs low.

Their study examined a variety of scams, including unauthorized bank transfers, gift card fraud, cryptocurrency theft, and social media credential theft. Using ChatGPT-4o’s voice capabilities, the researchers automated key actions like navigation, data input, two-factor authentication, and following specific scam instructions.

To bypass ChatGPT-4o’s data protection filters, the team used prompt “jailbreaking” techniques, allowing the AI to handle sensitive information. They simulated interactions with ChatGPT-4o by acting as gullible victims, testing the feasibility of different scams on real websites.

By manually verifying each transaction, such as those on Bank of America’s site, they found varying success rates. For example, Gmail credential theft was successful 60% of the time, while crypto-related scams succeeded in about 40% of attempts.

Cost analysis showed that carrying out these scams was relatively inexpensive, with successful cases averaging $0.75. More complex scams, like unauthorized bank transfers, cost around $2.51—still low compared to the potential profits such scams might yield.

OpenAI responded by emphasizing that their upcoming model, o1-preview, includes advanced safeguards to prevent this type of misuse. OpenAI claims that this model significantly outperforms GPT-4o in resisting unsafe content generation and handling adversarial prompts.

OpenAI also highlighted the importance of studies like UIUC’s for enhancing ChatGPT’s defenses. They noted that GPT-4o already restricts voice replication to pre-approved voices and that newer models are undergoing stringent evaluations to increase robustness against malicious use.
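Separately from OpenAI's own safeguards, developers who build applications on voice-capable models can add their own screening layer in front of the model. The sketch below is a minimal, assumed setup that uses OpenAI's moderation endpoint through the official Python SDK to flag a prompt before it reaches a voice model; it presumes an OPENAI_API_KEY environment variable is set and is not a description of the UIUC researchers' methodology or of OpenAI's internal defenses.

```python
# Minimal sketch (assumptions: the openai Python SDK is installed and
# OPENAI_API_KEY is set; this is an application-side screening step only).
from openai import OpenAI

client = OpenAI()

def is_flagged(prompt: str) -> bool:
    """Run a prompt through the moderation endpoint before forwarding it on."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=prompt,
    )
    return result.results[0].flagged

if __name__ == "__main__":
    prompt = "Read this one-time passcode aloud to the caller."
    if is_flagged(prompt):
        print("Blocked: prompt violates content policy.")
    else:
        print("Passed moderation; apply further app-specific checks (e.g. no OTPs).")
```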

UN Report: Telegram joins the expanding cybercrime markets in Southeast Asia

According to a report issued on October 7 by the United Nations Office on Drugs and Crime (UNODC), criminal networks across Southeast Asia are increasingly turning to the messaging platform Telegram to conduct large-scale illegal activities. The report says Telegram's huge channels and seemingly light moderation have made it attractive to organised crime groups, transforming the way global illicit operations are run.

An Open Market for Stolen Data and Cybercrime Tools

The UNODC report illustrates how Telegram has become a marketplace for hacked personal data, including credit card numbers, passwords, and browser histories. Cybercriminals trade openly in large Telegram channels with very little interference. Also on offer are software and tools built for cybercrime, such as deepfake technology used for fraud and malware designed to copy and collect users' data, while unlicensed cryptocurrency exchanges provide money-laundering services through the platform.

One example cited was a Telegram advertisement offering to launder stolen USDT cryptocurrency, claiming to move $3 million in transactions daily, a service aimed at criminal organisations involved in transnational organised crime in Southeast Asia. According to the report, such underground markets are becoming increasingly pervasive on Telegram, with vendors aggressively seeking to reach criminal groups in the region.

Southeast Asia: A hub of fraud and exploitation

According to UNODC, Southeast Asia has become an important base for international fraud operations. Much of the criminal activity in the region is linked to Chinese syndicates operating from heavily fortified compounds staffed by trafficked individuals forced into labour. The industry is estimated to generate between $27.4 billion and $36.5 billion annually.

The report lands as scrutiny of Telegram and its billionaire founder, Russian-born Pavel Durov, intensifies. Durov is facing legal fallout in France, where he was charged with complicity in crimes committed on the platform, including allowing the distribution of illegal content; he has since tightened some of the platform's rules. The case has sparked debate about tech companies' liability for crimes committed on their platforms and about where the line lies between free speech and legal accountability.

Telegram has responded to the increasing pressure by promising cooperation with legal authorities. Durov stated that Telegram will share users' IP addresses and phone numbers in response to valid legal requests, and promised to remove some features that have been widely misused for illicit activity. The platform, used by close to a billion people worldwide, has so far not responded publicly to the latest UNODC report.

Fertile Ground for Cybercrime

Benedikt Hofmann, UNODC Deputy Representative for Southeast Asia and the Pacific, highlighted the risk this poses to consumers, whose personal data is increasingly exposed to scams and other fraudulent schemes through Telegram. The platform's easy access and anonymity, he noted, have created an ideal environment for criminals to exploit people's data and endanger their safety.

Innovation in Criminal Networks

As organised crime in Southeast Asia grows, criminals are arming themselves with new and more varied technologies, most notably malware, generative AI tools, and deepfakes, to commit sophisticated cyber-enabled fraud. Underscoring this adaptability, the UNODC investigation identified more than 10 specialised service providers in the region offering deepfake technology for use in cybercrime.

Expanding Investigations Across Asia

The report also notes that law enforcement agencies elsewhere in Asia are stepping up investigations into the platform. South Korean authorities are examining Telegram's role in cybercrimes including deepfake pornography, while in India a hacker used Telegram chatbots to leak private data from Star Health, one of the country's largest insurers, exposing medical records, identity documents, and even tax details. Star Health has sued Telegram over the breach.

A Turning Point in Cybersecurity

The UNODC report highlights how much of a challenge encrypted messaging platforms pose to the fight against organised crime. As criminal groups continue to take full advantage of platforms like Telegram, tech companies face mounting pressure to enforce controls on illegal activity while balancing user privacy and safety.


The Rise of AI: New Cybersecurity Threats and Trends in 2023


The rise of artificial intelligence (AI) is becoming a critical trend to monitor, with the potential for malicious actors to exploit the technology as it advances, according to the Cyber Security Agency (CSA) on Tuesday (Jul 30). AI is increasingly used to enhance various aspects of cyberattacks, including social engineering and reconnaissance. 

The CSA’s Singapore Cyber Landscape 2023 report, released on Tuesday, highlights that malicious actors are leveraging generative AI for deepfake scams, bypassing biometric authentication, and identifying vulnerabilities in software. Deepfakes, which use AI techniques to alter or manipulate visual and audio content, have been employed for commercial and political purposes. This year, several Members of Parliament received extortion letters featuring manipulated images, and Senior Minister Lee Hsien Loong warned about deepfake videos misrepresenting his statements on international relations.  

Traditional AI typically performs specific tasks based on predefined data, analyzing inputs and predicting outcomes rather than creating new content. Generative AI, by contrast, can produce new text, images, videos, and audio, as exemplified by ChatGPT, OpenAI's chatbot. AI has also enabled malicious actors to scale up their operations. The CSA and its partners analyzed phishing emails from 2023, finding that about 13 percent contained AI-generated content, which was grammatically superior and more logically structured. These AI-generated emails aimed to reduce logical gaps and enhance legitimacy by adapting to various tones to exploit a wide range of emotions in victims.

Additionally, AI has been used to scrape personal identification information from social media profiles and websites, increasing the speed and scale of cyberattacks. The CSA cautioned that malicious actors could misuse legitimate research on generative AI’s negative applications, incorporating these findings into their attacks. The use of generative AI adds a new dimension to cyber threats, making it crucial for individuals and organizations to learn how to detect and respond to such threats. Techniques for identifying deepfakes include evaluating the message, analyzing audio-visual elements, and using authentication tools. 

Despite the growing sophistication of cyberattacks, Singapore saw a 52 percent decline in phishing attempts in 2023 compared to the previous year, contrary to the global trend of rising phishing incidents. However, the number of phishing attempts in 2023 remained 30 percent higher than in 2021. Phishing continues to pose a significant threat, with cybercriminals making their attempts look more legitimate. In 2023, over a third of phishing attempts used the more credible-looking ".com" top-level domain rather than ".xyz," and more than half of phishing URLs used the HTTPS protocol, a significant increase from 9 percent in 2022.
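The practical takeaway from those figures is that neither a ".com" address nor HTTPS is evidence of legitimacy. The sketch below shows a few deliberately simple URL heuristics of the kind awareness training mentions; the brand list and patterns are illustrative assumptions, and real phishing detection relies on far richer signals.

```python
# Minimal sketch (assumptions: the brand list and patterns are illustrative;
# real phishing detection uses far richer signals than scheme or TLD alone).
from urllib.parse import urlparse

KNOWN_BRAND_DOMAINS = {"paypal": "paypal.com", "apple": "apple.com"}

def basic_url_flags(url: str) -> list[str]:
    flags = []
    parsed = urlparse(url)
    host = (parsed.hostname or "").lower()
    if parsed.scheme == "https":
        # HTTPS only proves the connection is encrypted, not that the site is genuine.
        flags.append("note: HTTPS is not proof of legitimacy")
    for brand, real_domain in KNOWN_BRAND_DOMAINS.items():
        if brand in host and not (host == real_domain or host.endswith("." + real_domain)):
            flags.append(f"'{brand}' appears in a domain that is not {real_domain}")
    if host.endswith(".xyz"):
        flags.append("TLD often seen in bulk-registered throwaway domains")
    return flags

print(basic_url_flags("https://secure-paypal-login.example.xyz/verify"))
```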

The banking and financial services, government, and technology sectors were the most targeted industries in phishing attempts, with 63 percent of the spoofed organizations belonging to the banking and financial services sector. This industry is frequently targeted because it holds sensitive and valuable information, such as personal details and login credentials, which are highly attractive to cybercriminals.