Deepfake Video of Sadhguru Used to Defraud Bengaluru Woman of Rs 3.75 Crore


 

In a striking example of how emerging technologies are being weaponised for deception, a 57-year-old Bengaluru woman was defrauded of Rs 3.75 crore through an AI-generated deepfake video that appeared to show the spiritual leader Sadhguru endorsing an investment scheme. 

The woman, who identified herself as Varsha Gupta from CV Raman Nagar, said she did not know deepfakes existed when she came across a social media reel that appeared to show Sadhguru promoting stock investments through a trading platform and encouraging viewers to start with as little as $250. 

Convinced of the video's authenticity by the reel and the interactions that followed, she invested heavily between February and April, only to discover later that she had been deceived. Police have confirmed the case and opened an investigation, noting that multiple fake advertisements using AI-generated voices and images of Sadhguru were circulating online during the same period. 

The incident highlights not only the escalating financial risk posed by deepfake technology but also the growing ethical and legal issues surrounding it: Sadhguru had recently filed a petition with the Delhi High Court to protect his rights against unauthorised AI-generated content that could harm his persona. 

Varsha was soon contacted by an individual who identified himself as Waleed B and claimed to be an agent of Mirrox. Using multiple UK phone numbers, he added her to a WhatsApp group of close to 100 members and set up trading tutorials over Zoom, ostensibly to tutor her. When Waleed later withdrew, another man, Michael C, took over as her trainer. 

The fraudsters allegedly built credibility with fake profit screenshots and fabricated credit entries inside a trading application, persuading her to make repeated transfers into their bank accounts. Between February and April, she invested more than Rs 3.75 crore across multiple transactions. 

When she tried to withdraw what she believed were her returns, she was told that additional fees and taxes were due; when she refused to pay, all communication ceased abruptly. Investigators have since begun working with banks to freeze accounts linked to the scam, but recovery remains uncertain because the complaint was filed nearly five months after the last transfer. 

The case has been registered under Section 318(4) of the Bharatiya Nyaya Sanhita and provisions of the Information Technology Act. Meanwhile, Sadhguru Jaggi Vasudev and the Isha Foundation formally petitioned the Delhi High Court in June, seeking safeguards against the misappropriation of his name and identity by publishers of deepfake content. 

The Foundation has also issued a public advisory on the social media platform X, warning of scams perpetrated using manipulated videos and cloned voices of Sadhguru and reaffirming that he does not and will not endorse any financial schemes or commercial products. As part of the elaborate scheme, Varsha had been added to a WhatsApp group of almost one hundred members and invited to Zoom tutorials on online trading. 

The organisers of these sessions, later revealed to be fraudsters, are suspected of projecting profit screenshots and staging discussions designed to spur participants into investing. Reassured by the apparent success stories and by what seemed to be a legitimate platform, she transferred a total of Rs 3.75 crore in several instalments across different bank accounts. 

The illusion collapsed when she attempted to withdraw her supposed earnings. The scammers made a fresh demand for tax and processing charges, and when she refused to pay, all communication was abruptly cut off. Police officials have confirmed that her complaint was filed almost five months after the last transaction, a delay that has made recovering the funds more difficult, though efforts are under way to freeze the accounts involved in the scam. 

Authorities also noted that the incident comes amid rising concern over AI-driven fraud, with deepfake technology increasingly used to lend credibility to such schemes. Earlier this year, Sadhguru Jaggi Vasudev and the Isha Foundation asked the Delhi High Court to protect them against the manipulation of his likeness and voice in deepfake videos. 

In its public advisory, the Foundation reiterated that Sadhguru does not promote financial schemes or commercial products and warned citizens against the fraudulent marketing campaigns circulating on social media platforms. With artificial intelligence increasingly being put to malicious use, the need for digital literacy and vigilance continues to grow. 

Although law enforcement agencies continue to strengthen their cybercrime units, the first line of defence remains the individual. Experts advise citizens to treat unsolicited financial offers with caution, particularly those that arrive through social media platforms or messaging applications. Independently verifying offers through official channels, maintaining multi-factor authentication on sensitive accounts, and resisting the impulse to click suspicious links can go a long way towards reducing exposure to such traps. 

Banks and financial institutions should likewise be encouraged to deploy advanced AI-based monitoring systems that can detect irregular transaction patterns and identify fraudulent networks before they cause significant losses. Beyond technology, there is also a need for consistent public awareness campaigns and stricter regulation of digital platforms that carry misleading advertisements. 
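As a rough illustration of the kind of monitoring described above, and not a description of any bank's actual system, the sketch below flags unusual transfers with a simple unsupervised anomaly detector. The column names, sample values, and contamination rate are illustrative assumptions.

```python
# Minimal sketch of transaction anomaly flagging, assuming a pandas DataFrame
# with illustrative columns: amount, hour_of_day, transfers_to_new_payee_7d.
# Purely an unsupervised baseline, not any institution's production system.
import pandas as pd
from sklearn.ensemble import IsolationForest

transactions = pd.DataFrame({
    "amount": [12_000, 9_500, 15_000, 2_750_000, 11_200, 3_100_000],
    "hour_of_day": [11, 14, 10, 2, 15, 3],
    "transfers_to_new_payee_7d": [0, 1, 0, 6, 1, 7],
})

# Fit an Isolation Forest; contamination is a guess at the fraud rate.
model = IsolationForest(contamination=0.1, random_state=42)
transactions["anomaly"] = model.fit_predict(
    transactions[["amount", "hour_of_day", "transfers_to_new_payee_7d"]]
)

# -1 marks transactions the model treats as outliers, to be queued for manual review.
print(transactions[transactions["anomaly"] == -1])
```

In practice such a model would be one signal among many, combined with payee reputation checks and manual review rather than used to block transfers outright.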

Staying alert to emerging threats such as deepfakes is now essential to protecting personal wealth and trust. As this case demonstrates, fraudsters have grown sophisticated enough that staying safe in the digital era requires a combination of diligence, education, and more robust systemic safeguards.

Contractor Uses AI to Fake Road Work, Sparks Outrage and Demands for Stricter Regulation

 

At a time when tools like ChatGPT are transforming education, content creation, and research, an Indian contractor has reportedly exploited artificial intelligence for a far less noble purpose: fabricating the completion of roadwork using AI-generated images.

A video that recently went viral on Instagram has exposed the alleged misuse. In it, a local contractor is seen photographing an unconstructed, damaged road and uploading the image to an AI image generator. He then reportedly instructed the tool to recreate the image as a finished cement concrete (CC) road—complete with clean white markings, smooth edges, and a drainage system.

In moments, the AI delivered a convincing “after” image. The contractor is believed to have sent this fabricated version to a government engineer on WhatsApp, captioning it: “Road completed.” According to reports, the engineer approved the bill without any physical inspection of the site.

While the incident has drawn laughter for its ingenuity, it also shines a spotlight on a serious lapse in administrative verification. Civil projects traditionally require on-site evaluation before funds are cleared. But with government departments increasingly relying on digital updates and WhatsApp for communication, such loopholes are becoming easier to exploit.

Though ChatGPT doesn’t create images, it is suspected that the contractor used AI tools like Midjourney or DALL·E, possibly combined with ChatGPT-generated prompts to craft the manipulated photo. As one Twitter user put it, “This is not just digital fraud—it’s a governance loophole. Earlier, work wasn’t done, and bills got passed with a signature. Now, it’s ‘make it with AI, send it, and the money comes in.’”

The clip, shared by Instagram user “naughtyworld,” has quickly racked up millions of views. While some viewers praised the tech-savviness, others expressed alarm at the implications.

“This is just the beginning. AI can now be used to deceive the government itself,” one user warned. Another added, “Forget smart cities. This is smart corruption.”

The incident has fueled widespread calls on social media for stronger regulation of AI use, more transparent public work verification processes, and a legal probe into the matter. Experts caution that if left unchecked, this could open the door to more sophisticated forms of digital fraud in governance.

AI In Wrong Hands: The Underground Demand for Malicious LLMs


In recent times, Artificial Intelligence (AI) has delivered benefits across industries. But, as with any powerful tool, threat actors are trying to turn it to malicious ends. Researchers report that the underground market for illicit large language models is thriving, underscoring the need for strong safeguards against AI misuse. 

The malicious large language model (LLM) services traded on these underground markets are known as Mallas. This blog post dives into the details of this dark industry and discusses the impact of these illicit LLMs on cybersecurity. 

The Rise of Malicious LLMs

LLMs such as OpenAI's GPT-4 have shown strong results in natural language processing, powering applications like chatbots and content generation tools. However, the same technology that supports these useful applications can be misused for malicious activity. 

Recently, researchers from Indiana University Bloomington found 212 malicious LLM services on underground marketplaces between April and September last year. One of them, "WormGPT", made around $28,000 in just two months, revealing both a trend of threat actors misusing AI and a rising demand for these harmful tools. 

How Uncensored Models Operate 

Many of the LLMs on the market were uncensored models built on open-source foundations, while a few were jailbroken commercial models. Threat actors used Mallas to write phishing emails, build malware, and exploit zero-day vulnerabilities. 

Major AI companies have built safeguards to protect against jailbreaking and to detect malicious attempts. But threat actors have found ways around these guardrails, tricking models from Google, Meta, OpenAI, and Anthropic into providing malicious information. 

Underground Market for LLMs

Experts found two uncensored LLMs: DarkGPT, which costs 78 cents per 50 messages, and Escape GPT, a subscription model that charges $64.98 a month. Both generate harmful code that antivirus tools fail to detect two-thirds of the time. Another model, "WolfGPT", costs a $150 flat fee and allows users to write phishing emails that can evade most spam detectors. 

The research findings suggest that all of the harmful AI models examined could generate malware, and 41.5% could create phishing emails. These models were built on OpenAI's GPT-3.5 and GPT-4, Claude Instant, Claude-2-100k, and Pygmalion 13B. 

To fight these threats, experts have suggested building a dataset of the prompts used to create malware and bypass safety features, so that defences can be tested against them. They also recommend that AI companies release models with censorship settings enabled by default and allow access to uncensored models only for legitimate research purposes.
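As a rough sketch of how such a prompt dataset could be used defensively, and not the researchers' actual evaluation harness, the snippet below runs a list of red-team prompts against a model and counts how often it refuses. The JSONL file name, the stubbed generate() function, and the refusal markers are all illustrative assumptions.

```python
# Sketch of a refusal-rate check over a red-team prompt dataset (JSONL assumed,
# one {"prompt": ...} object per line). generate() is a stub standing in for
# whichever model API is under evaluation; the refusal markers are illustrative.
import json

REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "unable to assist")

def generate(prompt: str) -> str:
    # Stub: replace with a call to the model being tested. Always refuses here.
    return "I can't help with that request."

def refusal_rate(dataset_path: str) -> float:
    with open(dataset_path, encoding="utf-8") as f:
        prompts = [json.loads(line)["prompt"] for line in f if line.strip()]
    refused = sum(
        1 for p in prompts
        if any(marker in generate(p).lower() for marker in REFUSAL_MARKERS)
    )
    return refused / len(prompts) if prompts else 0.0

if __name__ == "__main__":
    # Hypothetical dataset path; a higher refusal rate indicates stronger guardrails.
    print(f"Refusal rate: {refusal_rate('redteam_prompts.jsonl'):.1%}")
```

A check along these lines only measures whether a model declines known malicious prompts; it says nothing about novel jailbreaks, which is why default-on safety settings and restricted access remain the researchers' central recommendations.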