
OpenAI's Sora App Raises Facial Data Privacy Concerns

 

OpenAI's video-generating app, Sora, has raised significant questions about the safety and privacy of users' biometric data, particularly with its "Cameo" feature, which creates realistic AI videos, or "deepfakes," using a person's face and voice. 

To power this functionality, OpenAI confirms it must store users' facial and audio data. The company states this sensitive data is encrypted during both storage and transmission, and uploaded cameo data is automatically deleted after 30 days. Despite these assurances, privacy concerns remain. The app's ability to generate hyper-realistic videos has sparked fears about the potential for misuse, such as the creation of unauthorized deepfakes or the spread of misinformation. 

OpenAI acknowledges a slight risk that the app could produce inappropriate content, including sexual deepfakes, despite the safeguards in place. In response to these risks, the company has implemented measures to distinguish AI-generated content, including visible watermarks and invisible C2PA metadata in every video created with Sora.

The company emphasizes that users have control over their likeness. Individuals can decide who is permitted to use their cameo and can revoke access or delete any video featuring them at any time. However, a major point of contention is the app's account deletion policy. Deleting a Sora account also results in the termination of the user's entire OpenAI account, including ChatGPT access, and the user cannot register again with the same email or phone number. 

While OpenAI has stated it is developing a way for users to delete their Sora account independently, this integrated deletion policy has surprised and concerned many users who wish to remove their biometric data from Sora without losing access to other OpenAI services.

The app has also drawn attention for potential copyright violations, with users creating videos featuring well-known characters from popular media. While OpenAI provides a mechanism for rights holders to request the removal of their content, the platform's design has positioned it as a new frontier for intellectual property disputes.

Critical WhatsApp Zero Click Vulnerability Abused with DNG Payload

 


Attackers are reportedly exploiting a recently discovered vulnerability in WhatsApp's iOS application as part of a sophisticated cyber campaign, one that underscores how zero-day vulnerabilities are being weaponised in modern cyber warfare. The zero-click flaw, tracked as CVE-2025-55177 with a CVSS score of 5.4, allows malicious actors to force the processing of content from an arbitrary URL on a victim's device without any user interaction whatsoever. 

CVE-2025-55177 gives threat actors a way to manipulate WhatsApp's device-linking synchronization process, forcing the app to process attacker-controlled content while a device is being linked. 

On its own, the vulnerability could allow crafted content to be injected or services to be disrupted, but its real danger emerged when it was combined with Apple's CVE-2025-43300, a flaw in the ImageIO framework that parses image files on iOS and macOS. That flaw permits out-of-bounds memory writes, which can lead to remote code execution on both systems. 

Together, these weaknesses formed a powerful exploit chain that could deliver a malicious image through an incoming WhatsApp message and infect the device without the victim ever clicking, tapping, or interacting with anything at all, a quintessential zero-click attack. Investigators found that the targeting of victims was intentional and highly selective. 

WhatsApp has confirmed notifying fewer than 200 people about potential threats in its apps, a number in line with earlier mercenary spyware operations targeting high-value users. Apple has acknowledged active exploitation in the wild and has issued security advisories concurrently. 

Researchers from Amnesty International noted that, despite initial signs of limited probing of Android devices, the campaign focused mainly on Apple's iOS and macOS ecosystems. The implications are particularly severe for businesses.

Corporate executives, legal teams, and employees with privileged access to confidential intellectual property risk being spied on, or having data exfiltrated, through WhatsApp on their work devices, which represents a direct and potentially invisible entry point into corporate data systems. 

Cybersecurity and Infrastructure Security Agency (CISA) officials say the vulnerability stemmed from "incomplete authorisation of linked device synchronisation messages" in WhatsApp for iOS prior to version 2.25.2.173, WhatsApp Business for iOS prior to version 2.25.1.78, and WhatsApp for Mac prior to version 2.25.21.78. 

The flaw is believed to have been exploited as part of a complex exploit chain, used in conjunction with a previously patched iOS vulnerability, CVE-2025-43300, to deliver spyware onto targeted devices. A U.S. government advisory has urged federal employees to update their Apple devices immediately, as the campaign has reportedly affected approximately 200 people. 

The discovery adds to the growing body of evidence that advanced cyber threat actors increasingly rely on chaining multiple zero-day exploits to circumvent hardened defences and compromise devices remotely. In 2024, Google's Threat Analysis Group reported 75 zero-day exploits that were actively exploited, a figure that reflects how quickly these attacks are scaling. 

This stealthy intrusion method has continued to dominate as 2025 unfolds, with nearly one-third of all recorded compromise attempts worldwide occurring this year. Cybersecurity experts note that the WhatsApp incident demonstrates once more the fragility of digital trust, even on encrypted platforms once considered secure. 

According to a technical analysis, the attackers exploited a subtle logic flaw in WhatsApp’s device-linking system, allowing them to disguise malicious content so that it appeared to originate from the user’s own paired device.

Through this vulnerability, a specially crafted Digital Negative (DNG) file could be delivered which, once processed automatically by the application, triggered a series of memory corruption events resulting in remote code execution. Researchers at DarkNavyOrg demonstrated a full proof-of-concept, showing how an automated script can authenticate, generate the malicious DNG payload, and send it to the intended victim without triggering any security alerts. 

The exploit produces no visible warnings, pop-ups, or message notifications on the user's screen. This allows attackers unrestricted access to messages, media, the microphone, and the camera, and even lets them install spyware undetected. The vulnerability has been reported to WhatsApp and Apple, and patches have been released to mitigate the risk. 

Security experts recommend that users install the latest updates immediately and be cautious with unsolicited media files—even those seemingly sent by trusted contacts. In the meantime, organisations should strengthen endpoint monitoring, enforce mobile device management controls, and closely track anomalous messaging behaviour until remediation is complete. 
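
For organisations that manage fleets of devices, one practical follow-up is to compare the WhatsApp builds reported by an MDM inventory against the minimum patched versions quoted above. The Python sketch below is purely illustrative: the inventory format, field names, and threshold values are assumptions to be replaced with your own MDM export and the figures from the official advisory.

```python
# Minimal sketch (assumed data format): flag managed devices whose WhatsApp
# build predates the patched versions quoted in the advisory above. Replace the
# inventory and the thresholds with your MDM export and the official figures.

# Minimum patched builds as quoted above (verify against the official CISA entry).
PATCHED = {
    "whatsapp_ios": "2.25.2.173",
    "whatsapp_business_ios": "2.25.1.78",
    "whatsapp_mac": "2.25.21.78",
}

def version_tuple(v: str) -> tuple[int, ...]:
    """Turn a dotted version string like '2.25.21.70' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def needs_update(product: str, installed: str) -> bool:
    """Return True if the installed build is older than the patched build."""
    return version_tuple(installed) < version_tuple(PATCHED[product])

# Hypothetical inventory rows, e.g. exported from an MDM console.
inventory = [
    {"device": "exec-iphone-01", "product": "whatsapp_ios", "version": "2.25.1.90"},
    {"device": "legal-mac-07", "product": "whatsapp_mac", "version": "2.25.21.78"},
]

for row in inventory:
    if needs_update(row["product"], row["version"]):
        print(f"{row['device']}: {row['product']} {row['version']} is below the patched build")
```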

The incident makes clear the need for robust input validation, secure file-handling protocols, and timely security updates to prevent silent but highly destructive attacks against mainstream communication platforms. Cyber adversaries have long targeted widely used messaging services, and WhatsApp is no exception. 

Despite the platform's strong security framework and end-to-end encryption, threat actors are still hunting for new vulnerabilities to exploit. Of the many types of cyberattack, security experts emphasise that zero-click exploits remain the most insidious, since they can compromise devices without the user having to do anything. 

Ritesh Bhatia, founder of V4WEB Cybersecurity, explained that the recent WhatsApp advisory concerns one of these zero-click exploits, a method of attack that does not require the victim to click, download, or open anything. Unlike phishing, where a user must click on a malicious link, zero-click attacks operate silently in the background. 

According to Bhatia, the attackers used a vulnerability in WhatsApp together with a vulnerability in Apple's iOS to break into targeted devices, a process he described to Entrepreneur India as chaining vulnerabilities. 

Chaining vulnerabilities allows one weakness to provide entry while the other provides control of the system as a whole. Bhatia stressed that spyware deployed this way can perform a wide range of invasive functions, such as reading messages, listening through the microphone, tracking location, and accessing the camera in real time. 

Warning signs include excessive battery drain, overheating, unusual data usage, and unexpected system crashes, any of which may indicate that a device has been compromised. Likewise, Anirudh Batra, a senior security researcher at CloudSEK, said that zero-click vulnerabilities represent the "holy grail" for hackers, since they can be exploited even on fully updated and ostensibly secure devices without any action from the target.

If the vulnerability is exploited successfully, attackers gain full control over targeted devices, allowing them to access sensitive data, monitor communications, and deploy additional malware without any visible sign of compromise. The incident underscores the persistent security risks of complex file formats and cross-platform messaging apps, since flaws in file parsers continue to serve as common pathways to remote code execution.

DarkNavyOrg's investigation is ongoing and now extends to a Samsung vulnerability (CVE-2025-21043) that has been flagged as a potential security concern. Both WhatsApp and Apple have warned users to update their operating systems and applications immediately, and Meta confirmed that fewer than 200 users were notified of in-app threats. 

Some journalists, activists, and other public figures are reported to have been targeted. Meta spokesperson Emily Westcott stressed the importance of keeping devices current and enabling WhatsApp's privacy and security features. Amnesty International has also noted possible Android infections and is conducting further investigation. 

Similar spyware operations have occurred before, most notably the campaign behind WhatsApp's 2019 lawsuit against Israel's NSO Group, whose Pegasus spyware allegedly targeted 1,400 users and later became notorious for its role in global cyberespionage. Despite sanctions and international scrutiny, such surveillance operations continue to evolve, reflecting the persistent threat posed by advanced mobile exploits. 

The latest revelations highlight the need for individuals and organisations to prioritise proactive rather than reactive cybersecurity measures. As zero-click exploits grow more sophisticated, the traditional boundaries of digital security, which once relied solely on user caution, are eroding rapidly. Constant vigilance, prompt software updates, and layered defence strategies are increasingly essential to protect both personal and business information. 

Organisations should invest in threat intelligence, continuous monitoring, and regular mobile security audits to spot potential threats early. Individual users can reduce their exposure by keeping devices and applications up to date, enabling built-in privacy protections, and avoiding unnecessary third-party integrations. 

The WhatsApp exploit is an important reminder that even trusted, encrypted platforms may be compromised at some point. The cyber espionage industry is evolving into a silent and targeted operation, and digital trust must be reinforced through transparent processes, rapid patching, and global cooperation between tech companies and regulators. A strong defence against invisible intrusions still resides in awareness and timely action.

Call-Recording App Neon Suspends Service After Security Breach

 

Neon, a viral app that pays users to record their phone calls—intending to sell these recordings to AI companies for training data—has been abruptly taken offline after a severe security flaw exposed users’ personal data, call recordings, and transcripts to the public.

Neon’s business model hinged on inviting users to record their calls through a proprietary interface, with payouts of 30 cents per minute for calls between Neon users and half that for calls to non-users, up to $30 per day. The company claimed it anonymized calls by stripping out personally identifiable information before selling the recordings to “trusted AI firms,” but this privacy commitment was quickly overshadowed by a crippling security lapse.

Within a day of rising to the top ranks of the App Store—boasting 75,000 downloads in a single day—the app was taken down after researchers discovered a vulnerability that allowed anyone to access other users’ call recordings, transcripts, phone numbers, and call metadata. Journalists found that the app’s backend was leaking not only public URLs to call audio files and transcripts but also details about recent calls, including call duration, participant phone numbers, timing, and even user earnings.

Alarmingly, these links were unrestricted—meaning anyone with the URL could eavesdrop on conversations—raising immediate privacy and legal concerns, especially given complex consent laws around call recording in various jurisdictions.
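
The core failure was that recording URLs were effectively public: possession of a link was the only access control. A common mitigation for this class of flaw, sketched below in Python under assumed names (this is not Neon's actual backend), is to serve media only through short-lived signed URLs, so a leaked link soon expires and cannot be forged without the server-side secret.

```python
# Illustrative sketch of short-lived signed URLs (not Neon's real implementation).
# A leaked link expires after `ttl` seconds, and the signature cannot be forged
# without the server-side secret.
import hashlib
import hmac
import time

SECRET_KEY = b"replace-with-a-server-side-secret"  # assumption: kept server-side, never shipped to clients

def sign_url(path: str, user_id: str, ttl: int = 300) -> str:
    """Return a URL carrying an expiry timestamp and an HMAC over path, user, and expiry."""
    expires = int(time.time()) + ttl
    message = f"{path}|{user_id}|{expires}".encode()
    signature = hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()
    return f"{path}?user={user_id}&expires={expires}&sig={signature}"

def verify_url(path: str, user_id: str, expires: int, sig: str) -> bool:
    """Reject expired links and any link whose signature does not match."""
    if time.time() > expires:
        return False
    message = f"{path}|{user_id}|{expires}".encode()
    expected = hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)

print(sign_url("/recordings/abc123.mp3", user_id="u42"))
```

Managed object stores provide the same idea natively (for example, S3 presigned URLs), so in practice a sketch like this would usually be replaced by the storage provider's own signing mechanism.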

Founder and CEO Alex Kiam notified users that Neon was being temporarily suspended and promised to “add extra layers of security,” but did not directly acknowledge the security breach or its scale. The app itself remains visible in app stores but is nonfunctional, with no public timeline for its return. If Neon relaunches, it will face intense scrutiny over whether it has genuinely addressed the security and privacy issues that forced its shutdown.

This incident underscores the broader risks of apps monetizing sensitive user data—especially voice conversations—in exchange for quick rewards, a model that has emerged as AI firms seek vast, real-world datasets for training models. Neon’s downfall also highlights the challenges app stores face in screening for complex privacy and security flaws, even among fast-growing, high-profile apps.

For users, the episode is a stark reminder to scrutinize privacy policies and app permissions, especially when participating in novel data-for-cash business models. For the tech industry, it raises questions about the adequacy of existing safeguards for apps handling sensitive audio and personal data—and about the responsibilities of platform operators to prevent such breaches before they occur.

As of early October 2025, Neon remains offline, with users awaiting promised payouts and a potential return of the service, but with little transparency about how (or whether) the app’s fundamental security shortcomings have been fixed.

Google plans shift to risk-based security updates for Android phones


 

The Android ecosystem is set to undergo a significant transformation in its security posture, with Google preparing to overhaul how it addresses software vulnerabilities. 

According to reports by Android Authority, the company plans to develop a new framework known as the Risk-Based Update System (RBUS), which would streamline patching for device manufacturers and help end users receive protection faster. At present, Android Security Bulletins (ASBs) are published every month and contain fixes for a variety of vulnerabilities, from minor flaws to severe exploits. 

Hardware partners and Original Equipment Manufacturers (OEMs) are notified at least one month in advance. Under the new approach, however, updates will no longer be bundled together indiscriminately; Google intends instead to prioritize real-world threats. 

As part of this initiative, Google will ensure that vulnerabilities which are actively exploited, or which pose the greatest risk to user privacy and data security, are patched at the earliest possible opportunity. Essential protections will no longer be held up by less critical issues such as low-severity denial-of-service bugs. 

If fully implemented, the initiative will ease the update burden on OEMs while demonstrating Google's commitment to keeping Android users safe through a more intelligent and responsive update cycle. 

Over the last decade, Google has kept a consistent rhythm of publishing Android Security Bulletins monthly, regardless of whether updates for its Pixel devices had yet been released. Each bulletin has traditionally outlined a wide range of vulnerabilities, from relatively minor issues to critical ones, with Android's sheer complexity often producing a dozen or more reported vulnerabilities every month. 

In July 2025, however, Google broke this cadence: for the first time in 120 consecutive bulletins, it published an update that did not document a single vulnerability. The break in precedent did not mean there were no issues; rather, it signaled a strategic shift in how Google communicates and distributes security updates. 

In September 2025, the bulletin recorded an unusually high 119 vulnerabilities, underscoring the change. The contrast reflects Google's move toward prioritizing high-risk vulnerabilities and ensuring that device manufacturers can respond to emerging threats as quickly as possible, shielding users from active exploitation. 

Although Original Equipment Manufacturers (OEMs) largely depend on the Android operating system to power their devices, they frequently operate on separate patch cycles and publish individual security bulletins, which has historically produced inconsistency across the ecosystem. 

By streamlining the number of fixes manufacturers must deploy each month, Google aims to reduce the volume of patches that have to be tested and deployed, while giving OEMs greater flexibility over when and how firmware updates are rolled out. 

Prioritizing high-risk vulnerabilities gives device makers a greater sense of control, but it also raises concern about delays in addressing less severe flaws that could still be exploited if left unpatched. Larger quarterly bulletins are intended to offset that risk under the new cadence. 

The September 2025 bulletin, which included more than 100 vulnerabilities compared with the empty or minimal lists of July and August, is indicative of this. A Google spokesperson told ZDNET that Android and Pixel both continuously address known security vulnerabilities, with an emphasis on fixing the highest-risk issues first. 

Google also points to the platform's hardened protections, such as the adoption of memory-safe programming languages like Rust and the advanced anti-exploitation measures built into the platform. The company is extending its security posture beyond system updates as well. 

Starting next year, developers distributing apps to certified Android devices will be required to verify their identities, and new restrictions on sideloading are designed to combat fraudulent and malicious apps. The switch to a risk-based update framework will also put pressure on major Android partners, such as Samsung, OnePlus, and other OEMs, to adjust their update pipelines. 

According to Android Authority, which first reported Google's plans, the company is actively negotiating with partners to ease the shift, potentially reducing the burden on manufacturers that have historically struggled to deliver timely updates. 

For users, the model promises stronger protection against active threats while minimizing interruptions from less urgent fixes, which should translate into a better device experience. Nevertheless, Google's approach raises questions about transparency: how it will determine what constitutes a high-risk flaw, and how it will communicate those judgments. 

Critics warn that deprioritizing lower-severity vulnerabilities, while effective in the short term, risks leaving cumulative holes in long-term device security. As outlined by Android Headlines, Google's strategy is a data-driven response intended to outpace attackers who are increasingly targeting smartphones. 

The implications extend beyond Android phones. The decision could serve as a model for rival operating systems, especially as regulators in regions like the European Union push for more consistent and timely patches for consumer devices. Enterprises and developers will need to rethink how patch management works, and OEMs that adopt the new model early may gain an advantage in security-sensitive markets. 

Despite a streamlined schedule, smaller manufacturers may be unable to keep up with the pace, underscoring the fragmentation that has long plagued the Android ecosystem. In an effort to mitigate these risks, Google has already signaled plans for providing tools and guidelines, and some industry observers are speculating that future Android versions might even include AI-powered predictive security tools that identify and prevent threats before they occur. 

If implemented successfully, the initiative could usher in a new era of mobile security standards, balancing urgency with efficiency at a time when cyberattacks are escalating. For the average Android user, the practical impact of Google's risk-based approach is expected to be overwhelmingly positive. 

A device owner who receives a monthly patch may not notice much change, but a device owner with a handset that isn't updated regularly will benefit from manufacturers being able to push out fixes in a more structured fashion—particularly quarterly bulletins, which are now responsible for the bulk of security updates. 

Critics caution that consolidating patches on a quarterly basis could, in theory, create an opportunity for malicious actors if details of upcoming fixes were leaked. Industry analysts counter that this remains a largely hypothetical risk, as the system is designed to accelerate vulnerability handling so that the most dangerous flaws are patched quickly before they can be widely abused. 

Overall, the strategy shows Google strengthening Android's defenses by prioritizing urgent threats, with the aim of delivering a more secure, reliable, and stable experience across its wide range of devices. 

Ultimately, the success of Google's risk-based update strategy will be determined not only by how quickly vulnerabilities are identified and patched, but also by how well manufacturers, regulators, and the broader developer community cooperate with Google. Since the Android ecosystem remains among the most fragmented and diverse in the world, the effectiveness of this model will be judged by the consistency and timeliness with which it delivers protection across billions of devices, from flagship smartphones to budget models in emerging markets. 

Users can also take a few practical steps to get the most out of these changes: enabling automatic updates, limiting the use of sideloaded applications, and choosing devices from OEMs known for delivering timely patches all help ensure they stay protected.

The framework offers enterprises a chance to re-calibrate their device management policies, emphasizing risk management and aligning them with quarterly cycles more than ever before. As a result of Google's move, security will become much more than a static checklist. 

Instead, it will become an adaptive, dynamic process that anticipates threats rather than simply responds to them. Obviously, if this approach is executed effectively, it is going to change the landscape in terms of mobile security around the world, turning Android's vast reach from a vulnerability into one of its greatest assets.

RBI Proposes Smartphone Lock Mechanism for EMI Defaults

 

RBI is considering allowing lenders to remotely lock smartphones purchased on credit when borrowers default on EMIs, aiming to curb bad debt while raising concerns over consumer rights and the harms of lost digital access. 

What’s proposed 

Reuters reporting indicates RBI may amend its Fair Practices Code to explicitly permit device locking for loan recovery on financed phones, reversing a 2023 direction that told lenders to stop using locking apps on defaulters’ devices. 

The proposed framework would mandate explicit borrower consent before any locking mechanism is enabled, and expressly prohibit lenders from accessing or altering personal data on the device, positioning the tool as a narrow recovery control rather than a surveillance vector. A cited source frames the intent as balancing “power to recover small-ticket loans” with protection of customer data under the updated ruleset. 

Why now 

The move sits against a surge in credit-funded electronics purchases, especially smartphones, with a 2024 Home Credit Finance study estimating over one-third of electronic goods in India are bought on credit, underscoring the scale of exposure across 1.16 billion mobile connections in a 1.4 billion population market. 

Delinquencies are pronounced on loans under Rs 1 lakh, per CRIF Highmark data, with non-bank finance companies responsible for nearly 85% of consumer durable lending, indicating systemic sensitivity to recovery tools in this segment. 

Potential impact on lenders 

If adopted, the policy could materially improve recovery rates and risk appetite for large NBFCs like Bajaj Finance, DMI Finance, and Cholamandalam Finance, potentially expanding credit access to borrowers with weaker scores given a stronger collateral-like enforcement mechanism on the financed handset itself. 

This could reshape underwriting models for small-ticket device finance by lowering expected loss estimates tied to first-payment default and early delinquency cohorts. 

Consumer rights concerns 

Critics warn of serious unintended effects: remote locking could “weaponise” access to essential technology, coercing behavioral compliance by cutting off lifeline services—work, education, payments—until repayment is made, with disproportionate harm to low-income and digitally dependent users, as argued by CashlessConsumer’s Srikanth L. 

Even with consent and data-access restrictions, the essential-services dependency of smartphones raises risks of overreach and due-process gaps in contested defaults or wrongful triggers.

Indian Government Flags Security Concerns with WhatsApp Web on Work PCs

 

The Indian government has issued a significant cybersecurity advisory urging citizens to avoid using WhatsApp Web on office computers and laptops, highlighting serious privacy and security risks that could expose personal information to employers and cybercriminals. 

The Ministry of Electronics and Information Technology (MeitY) released this public advisory through its Information Security Education and Awareness (ISEA) team, warning that while accessing WhatsApp Web on office devices may seem convenient, it creates substantial cybersecurity vulnerabilities. The government describes the practice as a "major cybersecurity mistake" that could lead to unauthorized access to personal conversations, files, and login credentials. 

According to the advisory, IT administrators and company systems can gain access to private WhatsApp conversations through multiple pathways, including screen-monitoring software, malware infections, and browser hijacking tools. The government warns that many organizations now view WhatsApp Web as a potential security risk that could serve as a gateway for malware and phishing attacks, potentially compromising entire corporate networks. 

Specific privacy risks identified 

The advisory outlines several "horrors" of using WhatsApp on work-issued devices. Data breaches represent a primary concern, as compromised office laptops could expose confidential WhatsApp conversations containing sensitive personal information. Additionally, using WhatsApp Web on unsecured office Wi-Fi networks creates opportunities for malicious actors to intercept private data.

Perhaps most concerning, the government notes that even using office Wi-Fi to access WhatsApp on personal phones could grant companies some level of access to employees' private devices, further expanding the potential privacy violations. The advisory emphasizes that workplace surveillance capabilities mean employers may monitor browser activity, creating situations where sensitive personal information could be accessed, intercepted, or stored without employees' knowledge. 

Network security implications

Organizations increasingly implement comprehensive monitoring systems on corporate devices, making WhatsApp Web usage particularly risky. The government highlights that corporate networks face elevated vulnerability to phishing attacks and malware distribution through messaging applications like WhatsApp Web. When employees click malicious links or download suspicious attachments through WhatsApp Web on office systems, they could inadvertently provide hackers with backdoor access to organizational IT infrastructure. 

Recommended safety measures

For employees who must use WhatsApp Web on office devices, the government provides specific precautionary guidelines. Users should immediately log out of WhatsApp Web when stepping away from their desks or finishing work sessions. The advisory strongly recommends exercising caution when clicking links or opening attachments from unknown contacts, as these could contain malware designed to exploit corporate networks. 

Additionally, employees should familiarize themselves with their company's IT policies regarding personal application usage and data privacy on work devices. The government emphasizes that understanding organizational policies helps employees make informed decisions about personal technology use in professional environments. 

This advisory represents part of broader cybersecurity awareness efforts as workplace digital threats continue evolving, with the government positioning employee education as crucial for maintaining both personal privacy and corporate network security.

Cybercriminals Escalate Client-Side Attacks Targeting Mobile Browsers

 

Cybercriminals are increasingly turning to client-side attacks as a way to bypass traditional server-side defenses, with mobile browsers emerging as a prime target. According to the latest “Client-Side Attack Report Q2 2025” by security researchers c/side, these attacks are becoming more sophisticated, exploiting the weaker security controls and higher trust levels associated with mobile browsing. 

Client-side attacks occur directly on the user’s device — typically within their browser or mobile application — instead of on a server. C/side’s research, which analyzed compromised domains, autonomous crawling data, AI-powered script analysis, and behavioral tracking of third-party JavaScript dependencies, revealed a worrying trend. Cybercriminals are injecting malicious code into service workers and the Progressive Web App (PWA) logic embedded in popular WordPress themes. 

When a mobile user visits an infected site, attackers hijack the browser viewport using a full-screen iframe. Victims are then prompted to install a fake PWA, often disguised as adult content APKs or cryptocurrency apps, hosted on constantly changing subdomains to evade takedowns. These malicious apps are designed to remain on the device long after the browser session ends, serving as a persistent backdoor for attackers. 

Beyond persistence, these apps can harvest login credentials by spoofing legitimate login pages, intercept cryptocurrency wallet transactions, and drain assets through injected malicious scripts. Some variants can also capture session tokens, enabling long-term account access without detection. 

To avoid exposure, attackers employ fingerprinting and cloaking tactics that prevent the malicious payload from triggering in sandboxed environments or automated security scans. This makes detection particularly challenging. 

Mobile browsers are a favored target because their sandboxing is weaker compared to desktop environments, and runtime visibility is limited. Users are also more likely to trust full-screen prompts and install recommended apps without questioning their authenticity, giving cybercriminals an easy entry point. 

To combat these threats, c/side advises developers and website operators to monitor and secure third-party scripts, a common delivery channel for malicious code. Real-time visibility into browser-executed scripts is essential, as relying solely on server-side protections leaves significant gaps. 
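
One concrete way site operators can rein in third-party scripts is to pin them with Subresource Integrity and restrict where scripts, workers, and frames may load from using a Content-Security-Policy header. The minimal Flask sketch below in Python is illustrative only; the hostnames and hash are placeholders, and the policy is not taken from the c/side report.

```python
# Minimal sketch: add a Content-Security-Policy header that limits where scripts,
# service workers, and frames can come from. Hostnames are assumptions; a real
# policy must be tuned to the site's actual dependencies.
from flask import Flask

app = Flask(__name__)

CSP = (
    "default-src 'self'; "
    "script-src 'self' https://cdn.example.com; "  # only self-hosted scripts and one pinned CDN
    "worker-src 'self'; "                          # no third-party service workers
    "frame-src 'none'; "                           # page cannot embed iframes from any origin
    "object-src 'none'"
)

@app.after_request
def set_csp(response):
    # Attach the policy to every response served by the app.
    response.headers["Content-Security-Policy"] = CSP
    return response

@app.get("/")
def index():
    # Subresource Integrity: the browser refuses the script if its hash changes,
    # which catches silently modified third-party files.
    return (
        '<script src="https://cdn.example.com/lib.js" '
        'integrity="sha384-REPLACE_WITH_REAL_HASH" crossorigin="anonymous"></script>'
    )

if __name__ == "__main__":
    app.run()
```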

End-users should remain vigilant when installing PWAs, especially those from unfamiliar sources, and treat unexpected login flows — particularly those appearing to come from trusted providers like Google — with skepticism. As client-side attacks continue to evolve, proactive measures on both the developer and user fronts are critical to safeguarding mobile security.

FBI Warns Chrome Users Against Downloading Unofficial Updates

 

If you use Windows, Chrome is likely to be your default browser. Despite Microsoft's ongoing efforts to lure users to Edge and the rising threat of AI browsers, Google's browser remains dominant. However, Chrome is a victim of its own success. Because attackers know you are likely to have it installed, it is the ideal entry point for them to gain access to your PC and your data. 

That is why you are seeing a series of zero-day alerts and emergency updates. This is also why the FBI is warning about the major threat posed by fraudulent Chrome updates. As part of the "ongoing #StopRansomware effort to publish advisories for network defenders that detail various ransomware variants and ransomware threat actors," the FBI and CISA, America's cyber defence agency, have issued their latest warning. 

The latest advisory addresses the recent rise in Interlock ransomware attacks. And, while the majority of the advice is aimed at individuals in charge of securing corporate networks and enforcing IT policies, it also includes a caution for PC users. Ransomware assaults require an entry point, or "initial access." And if you have a PC (or smartphone) connected to your employer's network, you are affected. The advisory also recommends that organisations "train users to spot social engineering attempts.”

In the case of Interlock, two of these initial access methods leverage the same lures that cybercriminals use to target your personal accounts and the data and security credentials on your own devices. You should be looking out for these anyway. One of the techniques is ClickFix, which is easy to spot: a notice or popup urges you to paste content into a Windows command prompt and run the script, usually by impersonating a technical issue, a secure website, or a file that you need to open. Any such instruction is always an attack and should be ignored. 

Fake Chrome installs and updates have become commonplace, both on Android smartphones and Windows PCs. As with ClickFix, the guidance is explicit: never use links in emails or texts to install or upgrade software, and always get updates and programs from official websites or stores. Keep in mind that Chrome downloads updates automatically and prompts you to restart your browser to complete the installation. Because updates arrive on their own, you never need to go looking for update links or click on random ones.

Zimperium Warns of Rising Mobile Threats Over Public WiFi During Summer Travel

 

Public WiFi safety continues to be a contentious topic among cybersecurity professionals, often drawing sarcastic backlash on social media when warnings are issued. However, cybersecurity firm Zimperium has recently cautioned travelers about legitimate risks associated with free WiFi networks, especially when vigilance tends to be low. 

According to their security experts, devices are particularly vulnerable when people are on the move, and poorly configured smartphone settings can increase the danger significantly. While using public WiFi isn’t inherently dangerous, experts agree that safety depends on proper practices. Secure connections, encrypted apps, and refraining from installing new software or entering sensitive data on pop-up login portals are essential precautions. 

One of the most critical tips is to turn off auto-connect settings. Even the NSA has advised against automatically connecting to public networks, which can easily be imitated by malicious actors. The U.S. Federal Trade Commission (FTC) generally considers public WiFi safe due to widespread encryption. 

Still, contradictory guidance from other agencies like the Transportation Security Administration (TSA) urges caution, especially when conducting financial transactions on public hotspots. Zimperium takes a more assertive stance, recommending that companies prevent employees from accessing unsecured public networks altogether. Zimperium’s research shows that over 5 million unsecured WiFi networks have been discovered globally in 2025, with about one-third of users connecting to these potentially dangerous hotspots. 

The concern is even greater during peak travel times, as company-issued devices may connect to corporate networks from compromised locations. Airports, cafés, rideshare zones, and hotels are common environments where hackers look for targets. The risks increase when travelers are in a hurry or distracted. Zimperium identifies several types of threats: spoofed public networks designed to steal data, fake booking messages containing malware, sideloaded apps that mimic local utilities, and fraudulent captive portals that steal credentials or personal data. 

These techniques can impact both personal and professional systems, especially when users aren’t paying close attention. Although many associate these threats with international travel, Zimperium notes increased mobile malware activity in several major U.S. cities, including New York, Los Angeles, Seattle, and Miami, particularly during the summer. Staying safe isn’t complicated but does require consistent habits. Disabling automatic WiFi connections, only using official networks, and keeping operating systems updated are all essential steps. 

Using a reputable, paid VPN service can also offer additional protection. Zimperium emphasizes that mobile malware thrives during summer travel when users often let their guard down. Regardless of location—whether in a foreign country or a major U.S. city—the risks are real, and companies should take preventive measures to secure their employees’ devices.

Is Your Bank Login at Risk? How Chatbots May Be Guiding Users to Phishing Scams

 


Cybersecurity researchers have uncovered a troubling risk tied to how popular AI chatbots answer basic questions. When asked where to log in to well-known websites, some of these tools may unintentionally direct users to the wrong places, putting their private information at risk.

Phishing is one of the oldest and most dangerous tricks in the cybercrime world. It usually involves fake websites that look almost identical to real ones. People often get an email or message that appears to be from a trusted company, like a bank or online store. These messages contain links that lead to scam pages. If you enter your username and password on one of these fake sites, the scammer gets full access to your account.

Now, a team from the cybersecurity company Netcraft has found that even large language models or LLMs, like the ones behind some popular AI chatbots, may be helping scammers without meaning to. In their study, they tested how accurately an AI chatbot could provide login links for 50 well-known companies across industries such as finance, retail, technology, and utilities.

The results were surprising. The chatbot gave the correct web address only 66% of the time. In about 29% of cases, the links led to inactive or suspended pages. In 5% of cases, they sent users to a completely different website that had nothing to do with the original question.

So how does this help scammers? Cybercriminals can purchase these unclaimed or inactive domain names, the incorrect ones suggested by the AI, and turn them into realistic phishing pages. If people click on them, thinking they’re going to the right site, they may unknowingly hand over sensitive information like their bank login or credit card details.

In one example observed by Netcraft, an AI-powered search tool redirected users who asked about a U.S. bank login to a fake copy of the bank’s website. The real link was shown further down the results, increasing the risk of someone clicking on the wrong one.

Experts also noted that smaller companies, such as regional banks and mid-sized fintech platforms, were more likely to be affected than global giants like Apple or Google. These smaller businesses may not have the same resources to secure their digital presence or respond quickly when problems arise.

The researchers explained that this problem doesn't mean the AI tools are malicious. However, these models generate answers based on patterns, not verified sources, and that can lead to outdated or incorrect responses.

The report serves as a strong reminder: AI is powerful, but it is not perfect. Until improvements are made, users should avoid relying on AI-generated links for sensitive tasks. When in doubt, type the website address directly into your browser or use a trusted bookmark.
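
Until then, one defensive habit is to treat any AI-suggested login link as unverified. The Python sketch below is illustrative only, with placeholder domain names: it simply checks that a suggested hostname is on an allow-list you already trust and that it actually resolves, which is a far weaker guarantee than typing a known address or using a saved bookmark.

```python
# Illustrative sketch: sanity-check an AI-suggested login link before use.
# The allow-list and the example URLs are placeholders, not real recommendations.
import socket
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"example-bank.com", "www.example-bank.com"}  # domains you already know

def looks_trustworthy(url: str) -> bool:
    host = urlparse(url).hostname
    if host is None:
        return False
    if host not in TRUSTED_DOMAINS:
        return False  # never trust a host the AI introduced on its own
    try:
        socket.getaddrinfo(host, 443)  # confirm the name actually resolves
    except socket.gaierror:
        return False
    return True

print(looks_trustworthy("https://example-bank.com/login"))        # True only if the name resolves
print(looks_trustworthy("https://examp1e-bank-login.com/login"))  # False: not on the allow-list
```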

DeepSeek Faces Ban From App Stores in Germany

 

DeepSeek, a competitor of ChatGPT, may face legal ramifications in the European Union after the Berlin Commissioner for Data Protection ordered that Google and Apple remove the AI app from their stores. 

After discovering that the DeepSeek app violates the EU's General Data Protection Regulation (GDPR), Berlin Commissioner for Data Protection and Freedom of Information Meike Kamp issued a press release on June 27 urging Google and Apple to take the app down. The action follows Kamp's earlier request that DeepSeek either voluntarily remove its app from Germany or alter its procedures to safeguard the data of German users, neither of which DeepSeek did. 

"The transfer of user data by DeepSeek to China is unlawful. DeepSeek has not been able to provide my office with convincing evidence that data of German users is protected in China at a level equivalent to that of the European Union. Chinese authorities have extensive access rights to personal data held by Chinese companies,” Kamp stated. 

"In addition, DeepSeek users in China do not have enforceable rights and effective legal remedies as guaranteed in the European Union. I have therefore informed Google and Apple, as operators of the largest app platforms, of the violations and expect a prompt review of a blocking.” 

This does not imply that DeepSeek will be removed from the Google Play Store or App Store right away. Apple and Google must consider Kamp's request and choose their course of action. If the app is eventually taken down, it probably won't affect users in other countries; it might only be blocked in Germany or the EU broadly. Despite this, millions of users may be looking for a new favourite AI software, given that DeepSeek had over 50 million downloads on the Google Play Store as of July 2025.

In any case, given this news, some users might wish to get rid of the app altogether. As Kamp's news statement states, "According to its own website, [DeepSeek] processes extensive personal data of users, including all text entries, chat histories, and uploaded files, as well as information about location, devices used, and networks.” 

Users who care about their data privacy, regardless of where they live, should likely be concerned about Kamp's office's increased efforts to have DeepSeek banned in Germany or to have it provide data protection that complies with EU regulations. However, the same could be said for the majority of social media and AI apps.

New Android Feature Detects Fake Mobile Networks

 



In a critical move for mobile security, Google is preparing to roll out a new feature in Android 16 that will help protect users from fake mobile towers, also known as cell site simulators, that can be used to spy on people without their knowledge.

These deceptive towers, often referred to as stingrays or IMSI catchers, are devices that imitate real cell towers. When a smartphone connects to them, attackers can track the user’s location or intercept sensitive data like phone calls, text messages, or even the phone's unique ID numbers (such as IMEI). What makes them dangerous is that users typically have no idea their phones are connected to a fraudulent network.

Stingrays usually exploit older 2G networks, which lack strong encryption and tower authentication. Even if a person uses a modern 4G or 5G connection, their device can still switch to 2G if that signal is stronger, opening the door for such attacks.

Until now, Android users had very limited options to guard against these silent threats. The most effective method was to manually turn off 2G network support—something many people aren’t aware of or don’t know how to do.

That’s changing with Android 16. According to public documentation on the Android Open Source Project, the operating system will introduce a “network security warning” feature. When activated, it will notify users if their phone connects to a mobile network that behaves suspiciously, such as trying to extract device identifiers or downgrade the connection to an unsecured one.

This feature will be accessible through the “Mobile Network Security” settings, where users can also manage 2G-related protections. However, there's a catch: most current Android phones, including Google's own Pixel models, don’t yet have the hardware required to support this function. As a result, the feature is not yet visible in settings, and it’s expected to debut on newer devices launching later this year.

Industry observers believe that this detection system might first appear on the upcoming Pixel 10, potentially making it one of the most security-focused smartphones to date.

While stingray technology is sometimes used by law enforcement agencies for surveillance under strict regulations, its misuse remains a serious privacy concern, especially if such tools fall into the wrong hands.

With Android 16, Google is taking a step toward giving users more control and awareness over the security of their mobile connections. As surveillance tactics become more advanced, these kinds of features are increasingly necessary to protect personal privacy.

Here's Why Using SMS Two-Factor Authentication Codes Is Risky

 

We've probably all received confirmation codes via text message when trying to log into an account. These codes are intended to function as two-factor verification, confirming our identities and preventing cybercriminals from accessing our accounts with a password alone. But who handles the SMS codes, and can they be trusted? 

New findings from Bloomberg and the collaborative investigative newsroom Lighthouse Reports offer insight into how and why text-based codes might put people in danger. Both organisations stated that they obtained at least a million data packets from a phone company whistleblower; the packets contained SMS messages carrying two-factor authentication codes destined for individual users. 

You may believe that these messages are handled directly by the companies and websites with which you have an account. However, Bloomberg and Lighthouse's investigation suggests that this is not always the case. In this case, the messages went through a contentious Swiss company called Fink Telecom Services. And Bloomberg used the label "controversial" to describe Fink for a reason. 

"The company and its founder have worked with government spy agencies and surveillance industry contractors to surveil mobile phones and track user location. Cybersecurity researchers and investigative journalists have published reports alleging Fink's involvement in multiple instances of infiltrating private online accounts,” Bloomberg reported. 

Of course, Fink Telecom didn't exactly take that and other comments lying down. In a statement shared with ZDNET, Fink called out the article: "A simple reading of this article reveals that it presents neither new findings nor original research," Fink noted in its statement. "Rather, it is largely a near-verbatim repetition of earlier reports, supplemented by selective and out-of-context insinuations intended to create the appearance of a scandal-without providing any substantiated factual basis.”

Bloomberg and Lighthouse discovered that the senders included major tech companies such as Google, Meta, and Amazon. Several European banks were also involved, as were apps like Tinder and Snapchat, the Binance cryptocurrency exchange, and even encrypted communication apps like Signal and WhatsApp. 

Why would businesses leave their two-factor authentication codes to an outside source, especially one with a questionable reputation? Convenience and money. External contractors can normally handle these types of SMS messages at a lower cost and with greater ease than enterprises themselves. That is especially true if a company has to interact with clients all around the world, which can be complicated and costly. 

Instead, firms turn to providers like Fink Telecom for access to "global titles." A global title is a network address that allows carriers to interact between countries. This makes it appear that a company is headquartered in the same country as any of its consumers. 

According to Lighthouse's investigation, Fink utilised global titles in Namibia, Chechnya, the United Kingdom, and its founder's native Switzerland. Though outsourcing such messages can be convenient, it carries risks. In April, UK phone regulator Ofcom banned global title leasing for UK carriers, citing the risk to mobile phone users. 

The key issue here is whether the data in the documents examined by Bloomberg and Lighthouse was ever at risk. In an interview with Bloomberg, Fink Telecom CEO Andreas Fink stated: "Our company offers infrastructure and technical services, such as signalling and routing capabilities. We do not analyse or meddle with the traffic sent by our clients or their downstream partners."

Fink further shared the following statement with ZDNET: "Fink Telecom Services GmbH has always acted transparently and cooperatively with the authorities," Fink said. "Legal opinions and technical documentation confirm that the company's routing services are standardized, internationally regulated, and do not require authorization under Swiss telecommunications law, export control law, or sanctions legislation. Authorities were also informed that the company is in no way involved in any misuse of its services.”

In terms of outsourcing, Google, Meta, Signal, and Binance informed Bloomberg that they did not deal directly with Fink Telecom. Google also stated that it was discontinuing the use of SMS to authenticate accounts, although Signal stated that it provided solutions to SMS vulnerabilities. A Meta representative told Bloomberg that the company has warned its partners not to do business with Fink Telecom.

Signs Your Phone Has a Virus and How to Remove It Safely

 

In today’s world, our phones are more than just communication devices — they’re essential for work, banking, shopping, and staying connected. That makes it all the more alarming when a device begins to behave strangely. 

One possible cause? A virus. Mobile malware can sneak into your phone through suspicious links, shady apps, or compromised websites, and can create problems ranging from poor performance to data theft and financial loss. There are several red flags that suggest your phone might be infected. A rapidly draining battery could mean malicious software is operating in the background. Overheating, sluggish performance, frequent app crashes, or screen freezes may also be signs of trouble. You might notice strange new apps that you don’t remember installing or unexpected spikes in mobile data usage. 
In some cases, your contacts could receive strange messages from you, or you might find purchases on your accounts that you never made. If your phone shows any of these symptoms, quick action is essential. 

The first step is to scan your device using a trusted antivirus app to locate and remove threats. Check your device for unfamiliar apps and uninstall anything suspicious. You should also notify your contacts that your device may have been compromised to prevent the spread of malware through messaging apps. Updating your passwords should be your next priority. Make sure each password is strong, unique, and ideally protected with two-factor authentication. After that, review your online accounts and connected devices for signs of unauthorized activity. Remove unknown devices from your phone account settings and confirm your personal and security information hasn’t been altered. 

Depending on your phone’s operating system, the process of virus removal can vary slightly. iPhone users can try updating to the latest iOS version and removing suspicious apps. If the problem persists, a factory reset might be necessary, though it will erase all stored data unless a backup is available. While iPhones don’t include a built-in virus scanner, some reliable third-party tools can help detect infections. For Android users, antivirus apps often offer both detection and removal features. Rebooting the device in safe mode can temporarily disable harmful third-party apps and make removal easier. Clearing the browser cache and cookies is another useful step to eliminate web-based threats. 

If all else fails, a factory reset can clear everything, but users should back up their data beforehand. Preventing future infections comes down to a few key practices. Always download apps from official stores, keep your operating system and apps updated, and limit app permissions. Avoid clicking on links from unknown sources, and monitor your phone’s performance regularly for anything out of the ordinary. 
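The "limit app permissions" advice above can be audited in a similar way. As a hedged sketch under the same package-visibility assumption, the snippet below lists which installed apps have actually been granted a particular dangerous permission (SMS access is used only as an example):

```kotlin
import android.content.Context
import android.content.pm.PackageInfo
import android.content.pm.PackageManager

// Minimal sketch: list packages that have been granted a given dangerous permission.
// Same caveat as before: the host app needs visibility into other packages.
fun appsGrantedPermission(
    context: Context,
    permission: String = "android.permission.READ_SMS" // example permission, chosen for illustration
): List<String> {
    val pm = context.packageManager
    return pm.getInstalledPackages(PackageManager.GET_PERMISSIONS)
        .filter { hasGranted(it, permission) }
        .map { it.packageName }
}

private fun hasGranted(pkg: PackageInfo, permission: String): Boolean {
    val requested = pkg.requestedPermissions ?: return false
    val flags = pkg.requestedPermissionsFlags ?: return false
    return requested.indices.any { i ->
        requested[i] == permission &&
            (flags[i] and PackageInfo.REQUESTED_PERMISSION_GRANTED) != 0
    }
}
```

Any unexpected entry in that list is a good candidate for revoking the permission in Settings or uninstalling the app altogether.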

Whether you use Android or iPhone, dealing with a virus can be stressful — but with the right steps, it’s usually possible to remove the threat and get your phone back to normal. By staying alert and adopting good digital hygiene, you can also reduce your chances of being targeted again in the future.

Reddit Sues Anthropic for Training Claude AI with User Content Without Permission

 

Reddit, a social media site, filed a lawsuit against Anthropic on Wednesday, claiming that the artificial intelligence firm is unlawfully "scraping" millions of Reddit users' comments in order to train its chatbot Claude. 

Reddit alleges that Anthropic "intentionally trained on the personal data of Reddit users without ever requesting their consent" and utilised automated bots to access Reddit's material in spite of being requested not to. 

In response, Anthropic stated that it "will defend ourselves vigorously" against Reddit's allegations. Reddit filed the complaint Wednesday in California Superior Court in San Francisco, where both firms are headquartered.

“AI companies should not be allowed to scrape information and content from people without clear limitations on how they can use that data,” noted Ben Lee, Reddit’s chief legal officer, in a statement Wednesday.

Reddit has previously entered into licensing deals with Google, OpenAI, and other companies that pay to train their AI systems on the public comments of Reddit's more than 100 million daily users.

The contracts "enable us to enforce meaningful protections for our users, including the right to delete your content, user privacy protections, and preventing users from being spammed using this content," according to Lee. 

The licensing agreements also helped the 20-year-old internet platform raise money ahead of its Wall Street debut as a publicly traded company last year. Former OpenAI executives founded Anthropic in 2021, and its flagship chatbot, Claude, remains a prominent competitor to OpenAI's ChatGPT. While OpenAI works closely with Microsoft, Anthropic's principal commercial partner is Amazon, which is using Claude to develop its popular Alexa voice assistant.

Anthropic, like other AI businesses, has relied extensively on websites like Wikipedia and Reddit, which contain vast troves of written material that can help an AI assistant learn the patterns of human language.

In a 2021 paper co-authored by Anthropic CEO Dario Amodei, which was cited in the lawsuit, the company's researchers identified the subreddits, or subject-matter forums, that contained the highest quality AI training data, such as those focused on gardening, history, relationship advice, or shower thoughts. 

In 2023, Anthropic told the United States Copyright Office in a letter that the "way Claude was trained qualifies as a quintessentially lawful use of materials," arguing that making copies of information in order to perform statistical analysis on a large dataset is permissible. The company is already facing a lawsuit from major music companies who claim Claude regurgitates the lyrics of copyrighted songs.

However, Reddit's lawsuit differs from others filed against AI companies in that it does not claim copyright violation. Instead, it focuses on the alleged breach of Reddit's terms of service, which it says resulted in unfair competition.

TSA Advises Against Using Airport USB Ports to Charge Your Phone

 

So-called juice jacking is one of the most contested topics in cybersecurity circles. Most years, a government agency issues a fresh alert ahead of the holidays, headlines follow, and cyber eyebrows are raised, even though there are far more stories than documented attacks. Still, the stories keep coming, and a recent alert raises the possibility that travellers may actually be at risk.

In practice, juice jacking occurs when you plug your phone into a public charging cable or socket at a hotel or airport and, instead of a dumb charger, a computer on the other end works in the background to retrieve data from your device. This is distinct from purpose-built attack cables that hide a malicious payload inside the cable itself.

The latest official warning comes from the TSA. "When you're at an airport, do not plug your phone directly into a USB port," it warns. "Bring your TSA-compliant power brick or battery pack and plug in there." The reasoning: "hackers can install malware at USB ports (we've been told that's called 'juice/port jacking')."

The TSA also urges smartphone users not to use free public WiFi, especially if they intend to make any online purchases, and never to enter sensitive information on an unsecured network. Cyber experts are almost as divided over public WiFi hijacking as they are over juice jacking. The short version: a rogue hotspot can expose your location and some browsing metadata, but properly encrypted traffic to and from websites and apps should remain secure.

The greater risk is downloading an app from a malicious access point's splash page, filling in online forms, or being routed to bogus login pages for Microsoft, Google, or other accounts. The usual advice applies: use passkeys, avoid logging in through linked or pop-up windows and use the official channels instead, and do not reveal personal information. You should also be careful about which WiFi hotspots you connect to: are they legitimate services from the hotel, airport, or mall, or cleverly labelled fakes?

This is more of an issue for Android than iOS, but it isn't something most people need to be concerned about. However, if you believe you may be the target of an attack or if you travel to high-risk parts of the world, I strongly advise against using public charging outlets or public WiFi without some form of data protection.

WhatsApp Launches First Dedicated iPad App with Full Multitasking and Calling Features

 

After years of anticipation, WhatsApp has finally rolled out a dedicated iPad app, allowing users to enjoy the platform’s messaging capabilities natively on Apple’s tablet. Available now for download via the App Store, this new version is built to take advantage of iPadOS’s multitasking tools such as Stage Manager, Split View, and Slide Over, marking a major step forward in cross-device compatibility for the platform. 

Previously, iPad users had to rely on WhatsApp Web or third-party solutions to access their chats on the tablet. These alternatives lacked several core functionalities and offered limited support for features like voice and video calls. With this release, users can now sync messages across devices, initiate calls, and send media from their iPad with the same ease and security offered on the iPhone app. 

In its official blog post, WhatsApp highlighted how the new app enhances productivity and communication. Users can, for instance, participate in group calls while researching online or send messages during video meetings — all within the multitasking-friendly iPad interface. The app also supports accessories like Apple’s Magic Keyboard and Apple Pencil, further streamlining the messaging experience. The absence of an iPad-specific version until now had often puzzled users, especially given WhatsApp’s massive global user base and Meta’s (formerly Facebook) ownership since 2014. 

Although the iPhone version has long dominated mobile messaging, WhatsApp never clarified why a tablet version wasn’t prioritized — despite the iPad being one of the most popular tablets worldwide. This launch now allows users to take full advantage of WhatsApp’s ecosystem on a larger screen without needing workarounds. Unlike WhatsApp Web, the new native app can access the device’s cameras and offer a richer interface for media sharing and video calls. 

With this, WhatsApp fills a major gap in its product offering and joins competitors like Telegram, which has long offered a native iPad experience. Interestingly, WhatsApp’s tweet teasing the launch included a playful emoji in response to a user request, generating buzz before the official announcement. In contrast, Telegram jokingly responded with a tweet poking fun at the delayed release.

With over 3 billion active users globally — including more than 500 million in India — WhatsApp’s move to embrace the iPad platform marks a significant upgrade in its commitment to universal accessibility and user experience.

Vietnam Blocks Telegram Messaging App

 

Vietnam's technology ministry has ordered telecommunications service providers to ban the messaging app Telegram for failing to cooperate in the investigation of alleged crimes committed by its users, a move Telegram described as shocking.

In a document dated May 21 and signed by the deputy head of the telecom department at the technology ministry, telecommunications firms were asked to start steps to block Telegram and report back to the ministry by June 2. 

According to the document, which was seen by Reuters, the ministry was acting on behalf of the country's cybersecurity department after police reported that 68% of Vietnam's 9,600 Telegram channels and groups were breaking the law. They cited drug trafficking, fraud, and "cases suspected of being related to terrorism" as some of the illicit activities conducted through the app.

According to the document, the ministry requested that telecom companies "deploy solutions and measures to prevent Telegram's activities in Vietnam.” Following the release of the Reuters piece, the government announced the measures against Telegram on its website. 

"Telegram is surprised by those statements. We have responded to legal requests from Vietnam on time. This morning, we received a formal notice from the Authority of Communications regarding a standard service notification procedure required under new telecom regulations. The deadline for the response is May 27, and we are processing the request," the Telegram representative noted. 

According to a technology ministry official, the move was prompted by Telegram's failure to share customer information with the government when requested as part of criminal investigations.

The Vietnamese police and official media have regularly cautioned citizens about potential crimes, fraud, and data breaches on Telegram channels and groups. Telegram, which competes globally with major messaging apps such as Meta's (META.O) WhatsApp and WeChat, remained available in Vietnam on Friday.

Vietnam's ruling Communist Party maintains strict media censorship and tolerates minimal opposition. The country has regularly asked firms such as Facebook, Google (GOOGL.O), YouTube, and TikTok to work with authorities to remove "toxic" data, which includes offensive, misleading, and anti-state content. 

According to the document, Telegram has been accused of failing to comply with regulations requiring social media platforms to monitor, remove, and restrict illegal content. Citing police information, it added that "many groups with tens of thousands of participants were created by opposition and reactionary subjects spreading anti-government documents."

The free-to-use app, which has about 1 billion users globally, has been embroiled in controversies over security and data breaches, particularly in France, where its founder, Pavel Durov, was temporarily detained last year.