
Crimes Extorting Ransoms by Manipulating Online Photos

 


More than 1,000 sophisticated virtual kidnapping scams are estimated to be in progress right now, prompting fresh warnings from the FBI as criminals increasingly use artificial intelligence tools to create photos, videos, and sound files designed to convince victims that their loved ones are in immediate danger. 

Fraudsters are now blending stolen images with hyper-realistic artificial intelligence tools to fabricate convincing evidence of abductions, exploiting the growing difficulty of distinguishing authentic content from digital manipulation.

Victims are commonly notified via text message that a family member has been kidnapped and that escalating threats will follow unless an immediate ransom is paid. 

When a victim asks for proof, the scammer often delivers what appear to be genuine images of the supposed abductee, frequently sent through disappearing messages so that the fakes cannot be closely inspected. This evolving approach, according to the FBI, represents a troubling escalation of extortion campaigns, one that exploits both panic and the increasingly blurred line between real and fabricated digital identities. 

The FBI has released a public service announcement stating that criminals are increasingly manipulating photos taken from social media and other open sources to manufacture convincing "proof-of-life" materials for use in virtual kidnapping schemes. As a rule, offenders contact victims by text, claim to have abducted a loved one, and demand immediate payment while using threats of violence to heighten fear. 

Scammers reportedly alter photos or generate videos using artificial intelligence that appear authentic at first glance, but comparison with verified images of the supposed victim often reveals inconsistencies, such as missing tattoos, incorrect scars, or distorted facial or body proportions. 
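As an illustration of that comparison step, the following minimal Python sketch screens a suspect image against verified photos using perceptual hashing. It assumes the Pillow and imagehash packages; the file names and distance threshold are hypothetical, and a mismatch is grounds for suspicion, not proof of manipulation.

    # A minimal sketch (pip install pillow imagehash). File names and the
    # distance threshold are hypothetical; a large perceptual-hash distance
    # to every verified photo is a reason for suspicion, never proof.
    from PIL import Image
    import imagehash

    def screen_proof_image(suspect_path, verified_paths, threshold=12):
        # Hamming distance between perceptual hashes: small = visually similar.
        suspect = imagehash.phash(Image.open(suspect_path))
        distances = {p: suspect - imagehash.phash(Image.open(p))
                     for p in verified_paths}
        return min(distances.values()) > threshold, distances

    flagged, dists = screen_proof_image(
        "ransom_photo.jpg", ["verified/beach.jpg", "verified/graduation.jpg"])
    print("flag for closer review" if flagged else "resembles a known photo", dists)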

Counterfeit materials are often sent through disappearing-message features that limit careful analysis. The PSA adds that malicious actors often exploit emotionally charged situations, such as public searches for missing persons, by posing as credible witnesses or supplying fabricated information. The FBI has offered several tips to help individuals reduce their vulnerability to these schemes. 

The FBI advises people to be cautious when posting personal images online, avoid giving sensitive information to strangers, and develop a private verification method - like a family code word - for communication during times of crisis. When faced with a ransom demand, the agency advises anyone targeted to remain calm, save any photos or messages of the purported victim, and attempt to contact that person directly before responding. 

Recent incidents shared by investigators and cybersecurity analysts make increasingly apparent how effectively criminals exploit both human emotion and new technology to create schemes that blur the line between reality and fiction. 

A Florida woman was defrauded of $15,000 after receiving a phone call in which scammers used an artificial-intelligence clone of her daughter's voice to ask for help. In a separate case, parents nearly fell victim to the same scheme when criminals impersonating their son claimed he had been in a car accident and needed immediate financial assistance. 

These cases reflect a wider pattern: fraud operations are becoming increasingly sophisticated, impersonating the voices, appearances, and behaviors of loved ones with alarming accuracy and pressuring families into hasty decisions under fear and confusion. Experts stress that as these tactics evolve, vigilance must go beyond basic precautions. 

Experts recommend limiting the amount of personal information shared on social media, especially travel plans, identifying details, or real-time location updates, and reviewing privacy settings to restrict access to trusted contacts. 

Families are also encouraged to establish a private verification word or phrase for confirming a relative's identity in an emergency, and to try to reach the alleged victim through a separate device before taking any action. People can further minimize their exposure to cybercriminals by maintaining strong, unique passwords, using reputable password managers, and securing all devices with reliable security software. 
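On the password point, a minimal Python sketch shows how a strong, unique password can be generated from the operating system's cryptographic randomness; the length and symbol set are arbitrary choices, not a prescription.

    # A minimal sketch: strong random passwords from the OS's CSPRNG.
    import secrets
    import string

    def make_password(length=20):
        alphabet = string.ascii_letters + string.digits + "!@#$%^&*-_"
        return "".join(secrets.choice(alphabet) for _ in range(length))

    # Use a different password per site and keep them in a reputable manager.
    print(make_password())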

Authorities emphasize that people must resist the urgency these scams create; slowing down, verifying claims, documenting communications, and involving law enforcement are crucial steps in preventing financial and emotional harm. 

According to investigators, even though public awareness of digital threats is on the rise, meaningful security depends on converting that awareness into deliberate, consistent precautions. Although the scheme is not yet widespread, investigators note that it has existed for several years, with early reports surfacing in outlets such as The Guardian well before the latest warnings were issued.

What has changed, experts say, is that the rapid advancement of generative AI tools has made these tactics much easier to execute and far more convincing, prompting the FBI to issue a new alert. As the FBI points out, the fabricated images and videos used in these schemes are rarely flawless; careful examination often reveals evidence of manipulation, such as missing tattoos, altered scars, and subtle distortions in body proportions.

Scammers aware of these weaknesses often send the material through timed or disappearing-message features so that victims cannot examine the content closely before it vanishes. 

The PSA stresses that good digital hygiene is crucial to reducing vulnerability to such scams: limiting the personal imagery shared online, being cautious about giving out personal information while traveling, and establishing a private family code word for verifying a loved one's identity in an emergency. Before considering any financial response, the FBI advises potential targets to attempt to speak directly with the supposedly endangered family member. 

Law enforcement and cybersecurity experts who track these threats caution that responsibility for prevention has increasingly fallen on the public and its proactive habits. 

By strengthening digital literacy, such as learning to recognize subtle signs of synthetic media, identifying messages designed to provoke fear, and maintaining regular communication routines within the family, people can add powerful layers of protection against cybercrime. Experts also recommend diversifying one's online presence by not using the same profile photograph on every platform and by reviewing social media archives for old posts that may inadvertently expose personal patterns or relationships.

Communities can also contribute to cybersafety by sharing verified information, reporting suspicious activity quickly, and encouraging open discussion about online safety among children, parents, and elderly relatives, who are often targeted because of their trust in technology or their unfamiliarity with it. 

Troubling as the FBI's warning about digital extortion is, it also suggests a clear path to reducing the reach and impact of these emotionally exploitative schemes: remain vigilant, behave thoughtfully online, and stay aware of how personal information circulates.

SBI Issues Urgent Warning Against Deepfake Scam Videos Promoting Fake Investment Schemes

 

The State Bank of India (SBI) has issued an urgent public advisory warning customers and the general public about the rising threat of deepfake scam videos. These videos, circulating widely on social media, falsely claim that SBI has launched an AI-powered investment scheme in collaboration with the Government of India and multinational corporations—offering unusually high returns. 

SBI categorically denied any association with such platforms and urged individuals to verify investment-related information through its official website, social media handles, or local branches. The bank emphasized that it does not endorse or support any investment services that promise guaranteed or unrealistic returns. 

In a statement published on its official X (formerly Twitter) account, SBI stated: “State Bank of India cautions all its customers and the general public about many deepfake videos being circulated on social media, falsely claiming the launch of an AI-based platform showcasing lucrative investment schemes supported by SBI in association with the Government of India and some multinational companies. These videos misuse technology to create false narratives and deceive people into making financial commitments in fraudulent schemes. We clarify that SBI does not endorse any such schemes that promise unrealistic or unusually high returns.” 

Deepfake technology, which uses AI to fabricate convincing videos by manipulating facial expressions and voices, has increasingly been used to impersonate public figures and create fake endorsements. These videos often feature what appear to be real speeches or statements by senior officials or celebrities, misleading viewers into believing in illegitimate financial products. This isn’t an isolated incident. Earlier this year, a deepfake video showing India’s Union Finance Minister allegedly promoting an investment platform was debunked by the government’s PIB Fact Check. 

The video, which falsely claimed that viewers could earn a steady daily income through the platform, was confirmed to be digitally altered. SBI’s warning is part of a broader effort to combat the misuse of emerging technologies for financial fraud. The bank is urging everyone to remain cautious and to avoid falling prey to such digital deceptions. Scammers are increasingly using advanced AI tools to exploit public trust and create a false sense of legitimacy. 

To protect themselves, customers are advised to verify any financial information or offers through official SBI channels. Any suspicious activity or misleading promotional material should be reported immediately. SBI’s proactive communication reinforces its commitment to safeguarding customers in an era where financial scams are becoming more sophisticated. The bank’s message is clear: do not trust any claims about investment opportunities unless they come directly from verified, official sources.

Protect Yourself from AI Scams and Deepfake Fraud

 

In today’s tech-driven world, scams have become increasingly sophisticated, fueled by advancements in artificial intelligence (AI) and deepfake technology. Falling victim to these scams can result in severe financial, social, and emotional consequences. Over the past year alone, cybercrime victims have reported average losses of $30,700 per incident. 

As the holiday season approaches, millennials and Gen Z shoppers are particularly vulnerable to scams, including deepfake celebrity endorsements. Research shows that one in five Americans has unknowingly purchased a product promoted through deepfake content, with the number rising to one in three among individuals aged 18-34. 

Sharif Abuadbba, a deepfake expert at CSIRO’s Data61 team, explains how scammers leverage AI to create realistic imitations of influencers. “Deepfakes can manipulate voices, expressions, and even gestures, making it incredibly convincing. Social media platforms amplify the impact as viewers share fake content widely,” Abuadbba states. 

Cybercriminals often target individuals as entry points to larger networks, exploiting relationships with family, friends, or employers. Identity theft can also harm professional reputations and financial credibility. To counter these threats, experts suggest practical steps to protect yourself and your loved ones. Scammers are increasingly impersonating loved ones through texts, calls, or video to request money. 

With AI voice cloning making such impersonations more believable, a pre-agreed safe word can serve as a verification tool. Jamie Rossato, CSIRO’s Chief Information Security Officer, advises, “Never transfer funds unless the person uses your special safe word.” If you receive suspicious calls, particularly from someone claiming to be a bank or official institution, verify their identity. 
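For the technically inclined, the safe-word idea can be pictured as a tiny challenge-response protocol built on a shared secret. The sketch below, using only Python's standard library, is an analogy rather than anything the experts quoted here prescribe; the secret and messages are hypothetical.

    # The safe-word idea restated as a tiny challenge-response protocol.
    # The shared secret would be agreed in person and never sent over the
    # channel being verified.
    import hashlib
    import hmac
    import secrets

    SHARED_SECRET = b"agreed-face-to-face"  # hypothetical

    def make_challenge():
        # The verifier sends a fresh random nonce with each request.
        return secrets.token_hex(8)

    def respond(challenge):
        # Only someone holding the secret can compute this response.
        return hmac.new(SHARED_SECRET, challenge.encode(), hashlib.sha256).hexdigest()

    def verify(challenge, response):
        return hmac.compare_digest(respond(challenge), response)

    challenge = make_challenge()
    print(verify(challenge, respond(challenge)))  # True only with the right secret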

Lauren Ferro, a cybersecurity expert, recommends calling the organization directly using its official number. “It’s better to be cautious upfront than to deal with stolen money or reputational damage later,” Ferro adds. Identity theft is the most reported cybercrime, making multi-factor authentication (MFA) essential. MFA adds an extra layer of protection by requiring both a password and a one-time verification code. Experts suggest using app-based authenticators like Microsoft Authenticator for enhanced security. 
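App-based authenticators of this kind typically implement time-based one-time passwords (TOTP, RFC 6238). A minimal sketch with the pyotp package shows the idea; the secret here is generated on the spot rather than enrolled through a real service.

    # A minimal TOTP sketch (pip install pyotp).
    import pyotp

    secret = pyotp.random_base32()   # normally enrolled once, e.g. via a QR code
    totp = pyotp.TOTP(secret)        # RFC 6238: six digits, rotating every 30 s

    code = totp.now()                # what the authenticator app would display
    print(code, totp.verify(code))   # the service's check: True inside the window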

Real-time alerts from your banking app can help detect unauthorized transactions. While banks monitor unusual activities, personal notifications allow you to respond immediately to potential scams. The personal information and media you share online can be exploited to create deepfakes. Liming Zhu, a research director at CSIRO, emphasizes the need for caution, particularly with content involving children. 
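The idea behind such alerts can be sketched as a crude anomaly check on each new transaction. Real banking systems use far richer models, so the figures and threshold below are purely hypothetical.

    # A crude, hypothetical anomaly check on a new debit; this only
    # illustrates the idea behind instant transaction alerts.
    from statistics import mean, stdev

    def flag_unusual(history, new_amount, z=3.0):
        if len(history) < 2:
            return False  # not enough history to judge
        mu, sigma = mean(history), stdev(history)
        return new_amount > mu + z * max(sigma, 1e-9)

    recent = [42.10, 18.75, 63.00, 27.40, 55.20]  # hypothetical recent debits
    print(flag_unusual(recent, 1500.00))          # True: worth an immediate alert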

Awareness remains the most effective defense against scams. Staying informed about emerging threats and adopting proactive security measures can significantly reduce your risk of falling victim to cybercrime. As technology continues to evolve, safeguarding your digital presence is more important than ever. By adopting these expert tips, you can navigate the online world with greater confidence and security.

BMJ Warns: Deepfake Doctors Fueling Health Scams on Social Media

 

Deepfake videos featuring some of Britain's most well-known television doctors are circulating on social media to sell fraudulent products, according to a report by the British Medical Journal (BMJ).

Doctors like Hilary Jones, Rangan Chatterjee, and the late Michael Mosley are being used in these manipulated videos to endorse remedies for various health conditions, as reported by journalist Chris Stokel-Walker.

The videos promote supposed solutions to issues such as high blood pressure and diabetes, often advertising supplements like CBD gummies. "Deepfaking" refers to the use of AI to create a digital likeness of a real person, overlaying their face onto another body, leading to realistic but false videos.

John Cormack, a retired Essex-based doctor, has been working with the BMJ to assess the scope of these fraudulent deepfake videos online. He found that the videos are particularly prevalent on platforms like Facebook. “It's far more cost-effective to invest in video creation than in legitimate research and development,” Cormack said.

Hilary Jones, a general practitioner and TV personality, voiced his concerns over the growing issue of his identity being deepfaked. He employs a specialist to locate and remove these videos, but the problem persists. “Even when we take them down, they reappear almost immediately under different names,” he remarked.

While many deepfakes may appear convincing at first, there are several ways to identify them:
  • Pay attention to small details: AI often struggles with rendering eyes, mouths, hands, and teeth accurately. Misaligned movements or blinking irregularities can be a sign; a crude automated version of this check is sketched after this list.
  • Look for inconsistencies: Glasses with unnatural glare or facial hair that appears artificial are common red flags, according to experts at MIT.
  • Consider the overall appearance: Poor lighting, awkward posture, or blurred edges are common indicators of deepfake content, according to Norton Antivirus.
  • Verify the source: If the video is from a public figure, ensure it has been posted by a credible source or an official account.
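As a rough illustration of the blink check above, the following Python sketch uses OpenCV's stock Haar cascades to count face frames in which no eyes are detected. The video file name is hypothetical, and a heuristic this crude can only flag material for human review; it proves nothing on its own.

    # A rough eye-visibility heuristic (pip install opencv-python).
    # Flags footage for human review; it does not prove a video is a deepfake.
    import cv2

    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")

    cap = cv2.VideoCapture("suspect_clip.mp4")  # hypothetical file name
    face_frames = eyeless_frames = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
            face_frames += 1
            roi = gray[y:y + h // 2, x:x + w]  # eyes sit in the upper half
            if len(eye_cascade.detectMultiScale(roi, 1.1, 5)) == 0:
                eyeless_frames += 1
    cap.release()
    if face_frames:
        print(f"{eyeless_frames}/{face_frames} face frames with no detectable "
              f"eyes; an extreme ratio can warrant closer inspection")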
The increase in deepfakes has sparked wider concerns, particularly regarding their use in creating revenge porn and manipulating political elections.

A spokesperson for Meta, the social media giant behind Facebook and Instagram, shared: “We will be investigating the examples highlighted by the British Medical Journal.

"We don’t permit content that intentionally deceives or seeks to defraud others, and we’re constantly working to improve detection and enforcement.

"We encourage anyone who sees content that might violate our policies to report it so we can investigate and take action.”