
CBI Uncovers Tech Support Scam Targeting Japanese Nationals in Multi-State Operation

 

The Central Bureau of Investigation (CBI) has uncovered a major international scam targeting Japanese citizens through fake tech support schemes. As part of its nationwide anti-cybercrime initiative, Operation Chakra V, the CBI arrested six individuals and shut down two fraudulent call centres operating across Delhi, Haryana, and Uttar Pradesh. 

According to officials, the suspects posed as representatives from Microsoft and Apple to deceive victims into believing their electronic devices were compromised. These cybercriminals manipulated their targets—mainly Japanese nationals—into transferring over ₹1.2 crore (approximately 20.3 million Japanese Yen) under the pretense of resolving non-existent technical issues. 

The investigation was carried out in collaboration with Japan’s National Police Agency and Microsoft, whose assistance played a key role in tracing the culprits and dismantling their infrastructure. The CBI emphasized that international cooperation was vital in identifying the criminal network and its operations. 

Among those arrested were Ashu Singh from Delhi, Kapil Ghakhar from Panipat, Rohit Maurya from Ayodhya, and three Varanasi residents—Shubham Jaiswal, Vivek Raj, and Adarsh Kumar. These individuals operated two fake customer support centres that mirrored legitimate ones in appearance but were in fact used to run scams. 

The fraud typically began when victims received pop-up messages on their computers claiming a security threat. They were prompted to call a number, which connected them to scammers based in India pretending to be technical support staff. Once in contact, the scammers gained remote access to the victims’ systems, stole sensitive information, and urged them to make payments through bank transfers or by purchasing gift cards. In one severe case, a resident of Hyogo Prefecture lost over JPY 20 million after the attackers converted stolen funds into cryptocurrency. 

Language discrepancies during calls, such as awkward Japanese and audible Hindi in the background, helped authorities trace the origin of the calls. Investigators identified Manmeet Singh Basra of RK Puram and Jiten Harchand of Chhatarpur Enclave as key figures responsible for managing lead generation, financial transfers, and the technical setup behind the fraud. Harchand reportedly operated numerous Skype accounts used in the scam. 

Between July and December 2024, the operation used 94 malicious Japanese-language URLs, traced to Indian IP addresses, to lure victims with fake alerts. The scheme relied heavily on social engineering tactics and tech deception, making it a highly sophisticated cyber fraud campaign with international implications.

Combating International Spoofed Calls: India's New Measures to Protect Citizens

 

In recent times, fraudsters have increasingly used international spoofed calls displaying Indian mobile numbers to commit cybercrime and financial fraud. These calls, which appear to originate within India, are actually made by criminals abroad who manipulate the calling line identity (CLI). 

Such spoofed calls have been used in various scams, including fake digital arrests, FedEx frauds, narcotics-in-courier schemes, and impersonation of government and police officials. To combat this growing threat, the Department of Telecommunications (DoT) and Telecom Service Providers (TSPs) in India have developed a system to identify and block incoming international spoofed calls. 
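At its core, such a filter compares where a call physically enters the network with the origin its calling line identity claims. The short Python sketch below illustrates that check in principle; the field names, the "+91" prefix rule, and the trunk labels are illustrative assumptions, not the actual DoT/TSP implementation.

# A minimal, illustrative sketch of CLI-based spoof filtering -- not the actual
# DoT/TSP system. Field names and the +91 rule are assumptions for illustration.

from dataclasses import dataclass


@dataclass
class IncomingCall:
    presented_cli: str   # calling line identity shown to the subscriber, e.g. "+91981234xxxx"
    entry_trunk: str     # "international" or "domestic" -- where the call enters the network


def is_probably_spoofed(call: IncomingCall) -> bool:
    """Flag calls whose CLI claims an Indian origin but which physically
    arrive over an international gateway -- the mismatch described above."""
    claims_indian_number = call.presented_cli.startswith(("+91", "0091"))
    return claims_indian_number and call.entry_trunk == "international"


# Example: a fraudster abroad displays an Indian mobile number.
call = IncomingCall(presented_cli="+919812340000", entry_trunk="international")
print(is_probably_spoofed(call))  # True -> candidate for blocking before delivery

In practice, a rule of this kind would be applied by TSPs at their international gateways before a call is delivered to the subscriber.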

This initiative aims to prevent such calls from reaching any Indian telecom subscriber. The Ministry of Communications announced that TSPs have been directed to block these calls and are already taking steps to prevent calls with spoofed Indian landline numbers. In addition to this, the DoT has launched the Sanchar Saathi portal, a citizen-centric platform designed to enhance user safety and security amid the rising threat of fraud and international call scams. This portal includes a feature called "Chakshu," which allows individuals to report suspicious calls and messages. 

Chakshu simplifies the process of flagging fraudulent communications, providing an extra layer of protection against cybercriminals. Reports filed through it feed a backend repository of citizen-initiated requests on the Sanchar Saathi platform, facilitating real-time intelligence sharing among the various stakeholders. The platform also records cases in which telecom resources have been misused, helping those stakeholders coordinate their response. 

Union Minister Ashwini Vaishnaw has highlighted additional measures, including creating a grievance redressal platform for reporting unintended disconnections and a mechanism for returning money frozen due to fraud. These efforts aim to address the concerns of citizens who may have been inadvertently affected by the anti-fraud measures. Since its launch in May last year, the Sanchar Saathi portal has been instrumental in enhancing the security of telecom users. It has helped track or block over 700,000 lost mobile phones and detect more than 6.7 million suspicious communication attempts. 

These efforts underscore the government's commitment to safeguarding citizens from cyber threats and ensuring the integrity of telecom services. The DoT and TSPs' proactive measures, along with the Sanchar Saathi portal, represent significant steps towards protecting Indian citizens from international spoofed calls and other forms of cybercrime. By leveraging advanced technology and fostering collaboration among stakeholders, these initiatives aim to create a safer digital environment for all.

AI-Based Deepfake Fraud: Police Recover ₹40,000 Defrauded From Kozhikode Victim


Kozhikode, India: In a ‘deepfake’ incident, a man from Kozhikode, Kerala, lost ₹40,000 after falling prey to an AI-based scam.

According to police officials, the victim, identified as Radhakrishnan, received a WhatsApp video call from an unknown number. The swindlers had used Artificial Intelligence tools to generate a deepfake video of a former colleague the victim knew. To further build trust, the caller cunningly mentioned names of the victim’s former acquaintances.

During the conversation, the scammer made an urgent request for ₹40,000, claiming that a relative was in hospital and needed the money for treatment. Trusting the caller, Radhakrishnan sent the amount via Google Pay.

Later, the caller asked Radhakrishnan for another ₹40,000, which raised his suspicions. He then reached out to his colleague directly and, to his disbelief, discovered that the entire incident was an AI-based deepfake fraud and that he had been robbed. Realizing this, he immediately filed a complaint with the Cyber Police.

The cyber cell promptly investigated the case and contacted the authorities of the bank where the money was being held. The account was traced to a private bank located in Maharashtra.

This was the first case of AI-based deepfake fraud detected in the state, according to the Kerala Police Cyber Cell.

Modus Operandi: The scammers collect images from social media profiles and use artificial intelligence to create misleading videos. These con artists combine AI technology with details such as mutual friends' names to appear legitimate and deceive innocent individuals.

How to Protect Oneself From Deepfakes? 

Similar cases of deepfakes and other AI-based frauds have raised concerns among cybersecurity professionals.

Experts have cautioned against such scams and offered some safety advice. Because most deepfakes have subpar resolution, people are urged to examine the video quality closely. On close inspection, deepfake videos often give themselves away by ending abruptly or looping back to the beginning after a set length of time. Before making any financial transaction, it is also wise to contact the person separately to confirm that they are genuinely the one on the video call.