Amazon Says It Has Disrupted GRU-Linked Cyber Operations Targeting Cloud Customers

Amazon has announced that its threat intelligence division has intervened in ongoing cyber operations attributed to hackers associated with Russia’s foreign military intelligence service, the GRU. The activity targeted organizations using Amazon’s cloud infrastructure, with attackers attempting to gain unauthorized access to customer-managed systems.

The company reported that the malicious campaign dates back to 2021 and has largely concentrated on Western critical infrastructure. Within this scope, energy-related organizations were among the most frequently targeted, indicating a strategic focus on high-impact industries.

Amazon’s investigation shows that the attackers initially relied on exploiting security weaknesses to break into networks. Over multiple years, they used a combination of newly discovered flaws and already known vulnerabilities in enterprise technologies, including security appliances, collaboration software, and data protection platforms. These weaknesses served as their primary entry points.

As the campaign progressed, the attackers adjusted their approach. By 2025, Amazon observed a reduced reliance on vulnerability exploitation. Instead, the group increasingly targeted customer network edge devices that were incorrectly configured. These included enterprise routers, VPN gateways, network management systems, collaboration tools, and cloud-based project management platforms.

Devices with exposed administrative interfaces or weak security controls became easy targets. By exploiting configuration errors rather than software flaws, the attackers achieved the same long-term goals: maintaining persistent access to critical networks and collecting login credentials for later use.

Amazon noted that this shift reflects a change in operational focus rather than intent. While misconfiguration abuse has been observed since at least 2022, the sustained emphasis on this tactic in 2025 suggests the attackers deliberately scaled back efforts to exploit zero-day and known vulnerabilities. Despite this evolution, their core objectives remained unchanged: credential theft and quiet movement within victim environments using minimal resources and low visibility.

Based on overlapping infrastructure and targeting similarities with previously identified threat groups, Amazon assessed with high confidence that the activity is linked to GRU-associated hackers. The company believes one subgroup, previously identified by external researchers, may be responsible for actions taken after initial compromise as part of a broader, multi-unit campaign.

Although Amazon did not directly observe how data was extracted, forensic evidence suggests passive network monitoring techniques were used. Indicators included delays between initial device compromise and credential usage, as well as unauthorized reuse of legitimate organizational credentials.

The compromised systems were customer-controlled network appliances running on Amazon EC2 instances. Amazon emphasized that no vulnerabilities in AWS services themselves were exploited during these attacks.

Once the activity was detected, Amazon moved to secure affected instances, alerted impacted customers, and shared intelligence with relevant vendors and industry partners. The company stated that coordinated action helped disrupt the attackers’ operations and limit further exposure.

Amazon also released a list of internet addresses linked to the activity but cautioned organizations against blocking them without proper analysis, as they belong to legitimate systems that had been hijacked.

To mitigate similar threats, Amazon recommended immediate steps such as auditing network device configurations, monitoring for credential replay, and closely tracking access to administrative portals. For AWS users, additional measures include isolating management interfaces, tightening security group rules, and enabling monitoring tools like CloudTrail, GuardDuty, and VPC Flow Logs.
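Some of those checks can be scripted. The boto3 sketch below is a minimal illustration rather than Amazon’s published guidance: it flags security groups that leave common administrative ports open to the whole internet and enables GuardDuty where no detector exists. The port list is an assumption to adapt to your own environment.

```python
# Sketch: flag security groups that expose common administrative ports to the
# whole internet, then enable GuardDuty if no detector exists in this region.
# The port list is an illustrative assumption, not AWS or Amazon guidance.
import boto3

ADMIN_PORTS = {22, 3389, 8443}  # SSH, RDP, a typical management UI (assumed)

ec2 = boto3.client("ec2")
for sg in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in sg.get("IpPermissions", []):
        world_open = any(r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", []))
        if world_open and rule.get("FromPort") in ADMIN_PORTS:
            print(f"{sg['GroupId']}: port {rule['FromPort']} is open to the internet")

guardduty = boto3.client("guardduty")
if not guardduty.list_detectors()["DetectorIds"]:
    detector = guardduty.create_detector(Enable=True)
    print(f"GuardDuty enabled, detector {detector['DetectorId']}")
```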

Hypervisor Ransomware Attacks Surge as Threat Actors Shift Focus to Virtual Infrastructure

Hypervisors have emerged as a critical yet often poorly secured component of modern infrastructure, and attackers have seized on this to expand the reach of their ransomware operations. The security community has observed a shift in tactics: rather than attacking heavily fortified devices one at a time, attackers now target the hypervisor, the platform from which hundreds of machines can be controlled at once. A compromised hypervisor is, in effect, a force multiplier for a ransomware attack.

Threat-hunting data from Huntress shows how quickly the trend is accelerating. In early 2025, hypervisors figured in just a few percent of ransomware attacks; by the latter part of the year, hypervisor-level encryption featured in roughly a quarter of them, driven largely by the Akira ransomware group’s focus on vulnerabilities in virtualized infrastructure.

Hypervisors appeal to attackers because they typically sit outside the view of traditional security software. Bare-metal hypervisors are of particular interest, since endpoint protection agents cannot be installed on them. Once attackers gain root access, they can encrypt the virtual machines’ disks directly, often using the hypervisor’s built-in functions to carry out the encryption without deploying a ransomware binary at all.

That leaves security software with little chance of detecting the attack. These intrusions often begin with weak credentials and poor network segmentation: when hypervisor management interfaces are reachable from an organization’s broader internal network, attackers who gain a foothold can move laterally and take control of the virtualization layer. Huntress has also observed attackers abusing native management tools to adjust virtual machine settings, degrade defenses, and prepare the environment for large-scale ransomware deployment.

The surge in attacker interest underscores that the hypervisor layer deserves the same security attention as servers and endpoints. Remediation calls for refined access controls, proper segmentation of management networks, and current, well-maintained patches on this infrastructure, since attackers have repeatedly exploited known vulnerabilities to gain full administrative control and rapidly encrypt virtualized environments. Alongside comprehensive prevention, recovery planning is essential.
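As a concrete starting point, the short Python sketch below probes a list of hypervisor addresses for management ports reachable from wherever the script runs. The host list and port-to-service labels are illustrative assumptions; a hit simply means the interface is not segmented away from that vantage point.

```python
# Minimal exposure probe: from a non-management host, try TCP connections to
# common hypervisor management ports. Host list and port labels are
# illustrative assumptions; a successful connect means the management
# interface is reachable from this network segment.
import socket

MGMT_PORTS = {22: "SSH", 443: "HTTPS management UI", 902: "ESXi host agent"}
HYPERVISORS = ["192.0.2.10", "192.0.2.11"]  # hypothetical inventory

for host in HYPERVISORS:
    for port, label in MGMT_PORTS.items():
        try:
            with socket.create_connection((host, port), timeout=1.0):
                print(f"{host}: {port} ({label}) reachable -- check segmentation")
        except OSError:
            pass  # closed, filtered, or unreachable: what we want to see
```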

Hypervisor-based ransomware is aimed at environments that can go down entirely, which makes reliable backups, ideally immutable ones, indispensable; organizations without a recovery plan in place are especially exposed. As ransomware threats continue to evolve and grow more sophisticated, the hypervisor has become a focal point on the battlefield of business security.

A business that leaves the hypervisor layer unsecured effectively presents attackers with what they have always wanted: control of its entire operation at a snap of the fingers.

Cellik Android Spyware Exploits Play Store Trust to Steal Data

A recently discovered Android remote access trojan named Cellik has emerged as a serious mobile threat, hiding inside legitimate applications through Google Play integration to evade detection by security solutions.

Cellik is advertised as malware-as-a-service (MaaS) on cybercrime forums, with subscriptions starting at approximately $150 a month. One of its most alarming features is the ability to inject malicious payloads into legitimate Google Play applications, producing trojanized apps that victims install without suspicion.

Once installed, Cellik gives the attacker complete control over the target device. Operators can remotely stream the device’s screen live, access all files, read notifications, and even use a stealthy browser to visit websites and enter form data without the target’s awareness. The malware also ships with an app-inject capability that lets attackers superimpose fake login screens on legitimate applications, such as banking or email apps, and harvest credentials and other sensitive data.

Cellik’s Play Store integration also includes an automated APK builder: operators can browse the store, choose popular apps, and bundle them with the Cellik payload in a single click. The malware’s sellers claim this allows them to bypass Google Play Protect and other device-based security scanners, though Google has not independently verified this.

Android users should heed the advice of security experts: do not sideload APKs from unknown sources, keep Play Protect enabled at all times, be judicious about app permissions, and watch for anything unusual on their phones. As Cellik marks a notable new development in Android malware, both users and the security community should stay vigilant to protect sensitive data and device integrity.

Microsoft Users Warned as Hackers Use Typosquatting to Steal Login Credentials

Microsoft account holders are being urged to stay vigilant as cybercriminals increasingly target them through a deceptive tactic known as typosquatting. Attackers are registering look-alike websites and email addresses that closely resemble legitimate Microsoft domains, with the goal of tricking users into revealing their passwords.

Harley Sugarman, CEO of Anagram Security, recently highlighted this risk by sharing a screenshot of a phishing email he received that used this method. In the sender’s address, the letter “m” was cleverly replaced with an “r” and an “n,” creating a nearly identical visual match. Because the difference is subtle, many users may not notice the change and could easily be misled.

Typosquatting itself is not a new cybercrime technique. For years, hackers and online fraudsters have relied on it to exploit small typing errors or momentary lapses in attention. The strategy involves purchasing domains or email addresses that closely mimic real ones, hoping users will accidentally visit or click them. Once there, victims are often presented with fake login pages designed to look authentic. Any credentials entered are then captured and sent directly to the attackers.

A major reason this tactic continues to succeed is that many people don’t take time to carefully inspect URLs or sender addresses. A single incorrect character in a link or email can redirect users to a convincing replica of a legitimate site, where usernames and passwords are harvested without suspicion.
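The “rn” for “m” swap can even be caught mechanically. The toy Python check below normalizes a few well-known look-alike sequences and compares the result against a list of legitimate domains; the substitution table and allow-list are illustrative assumptions, not an exhaustive detector.

```python
# Toy typosquat check: normalize well-known look-alike sequences ('rn' renders
# much like 'm') and compare against an allow-list of legitimate domains.
# The substitution table and allow-list are illustrative, not exhaustive.
CONFUSABLES = {"rn": "m", "vv": "w", "0": "o", "1": "l"}
LEGIT_DOMAINS = {"microsoft.com"}

def normalize(domain: str) -> str:
    d = domain.lower()
    for fake, real in CONFUSABLES.items():
        d = d.replace(fake, real)
    return d

def looks_typosquatted(domain: str) -> bool:
    """True if a domain is not on the allow-list but normalizes to one that is."""
    return domain.lower() not in LEGIT_DOMAINS and normalize(domain) in LEGIT_DOMAINS

print(looks_typosquatted("rnicrosoft.com"))  # True: 'rn' masquerades as 'm'
print(looks_typosquatted("microsoft.com"))   # False: the genuine domain
```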

To reduce the risk of falling victim, security experts recommend switching to passkeys wherever possible, as they are significantly more secure than traditional passwords. Microsoft and other tech companies have been actively encouraging this shift. For users who can’t yet adopt passkeys, strong and unique passwords—or long passphrases—are essential, ideally stored and autofilled using a reputable password manager.

Additional protection measures include enabling browser safeguards. Both Microsoft Edge and Google Chrome can flag suspicious or mistyped URLs if these features are turned on. Bookmarking frequently used websites, such as email services, banking platforms, shopping portals, and social media accounts, can also help ensure you’re visiting the correct destination.

Standard phishing precautions remain just as important. Be skeptical of unexpected emails claiming there’s an issue with your account. Instead of clicking links, log in through a trusted, independent method to verify any alerts. Avoid downloading attachments or replying to unsolicited messages, as engagement can signal to scammers that your account is active.

Carefully reviewing sender email addresses, hovering over links to preview their destinations, and watching for messages that create urgency—such as demands to immediately reset a password—can help identify phishing attempts. Using reliable antivirus software adds another layer of defense against malware and other online threats.

Although typosquatting is one of the oldest scams in cybersecurity, it continues to resurface because it preys on simple mistakes. Staying alert while browsing unfamiliar websites or checking your inbox remains one of the most effective ways to stay safe.

UK Report Finds Rising Reliance on AI for Emotional Wellbeing

New research from the United Kingdom’s AI Security Institute documents how rapidly the role of artificial intelligence (AI) is expanding, revealing an extraordinary evolution in how the technology is used compared with just a few years ago.

The government-backed research indicates that nearly one in three British adults now rely on artificial intelligence for emotional reassurance or social connection. The study involved testing more than 30 unnamed AI models across a range of disciplines, such as national security, scientific reasoning and technological capability, over a period of two years.

The institute’s first study of its kind also found that a smaller but significant segment of the population, roughly one in 25 respondents, engages with these tools daily for companionship or emotional support, a sign that AI is becoming mainstream in people’s personal lives. The findings draw on an in-depth survey of more than 2,000 adults.

Users turned primarily to conversational AI systems such as OpenAI’s ChatGPT and models from the French company Mistral. This signals a wider cultural shift in which chatbots are no longer viewed only as digital utilities, but as informal confidants for millions coping with loneliness, emotional vulnerability, or a desire for consistent communication.

Published as part of the AI Security Institute’s inaugural Frontier AI Trends Report, the research marks the UK government’s first comprehensive effort to assess both the technical frontier and the real-world impact of advanced AI models.

Founded in 2023 to guide national understanding of AI’s risks, capabilities, and broader societal implications, the institute has conducted a two-year structured evaluation of more than 30 frontier AI models, blending rigorous technical testing with behavioural insights into public adoption.

While the report emphasizes high-risk domains, such as cyber capability assessments, safety safeguards, national security resilience, and concerns about the erosion of human oversight, it also documents what it calls “early signs of emotional impact on users,” a dimension previously treated as secondary in government AI evaluations.

A survey of 2,028 UK adults conducted over the past year indicated that more than one-third of respondents have used artificial intelligence for emotional support, companionship, or sustained social interaction.

Engagement extends beyond intermittent experimentation: 8 percent of respondents say they rely on AI for emotional and conversational needs every week, and 4 percent use it every day. Chat-driven AI, the report notes, now serves not only as an analytical instrument but as a consistent conversational presence, taking on an unanticipated role in the personal routines of a growing subset of the population.

The AI Security Institute’s research assesses not only the growing emotional footprint of AI systems, but also the broader threats that emerge as frontier models become more powerful. Considerable attention goes to cyber security, where there is persistent concern that AI could be used to scale digital attacks, though the report stresses that it can equally reinforce national defences and strengthen systems’ resilience against intrusion.

The institute found that certain AI models are becoming more capable of identifying and exploiting security vulnerabilities at an unprecedented rate, with performance benchmarks indicating those capabilities are doubling roughly every eight months.

Experiments have shown that AI systems can execute expert-level cyber security tasks normally associated with more than a decade of specialized human training and experience.

Scientific performance is advancing rapidly as well: 2025 assessments found that AI models had already surpassed PhD-level human experts in biology and were not far behind in chemistry, where the gap was narrowing quickly.

The institute also explored long-theorized concerns about humans losing control over advanced AI, a scenario popularized in science fiction but now treated with measurable seriousness by a large segment of the scientific community.

Laboratory testing showed that certain models display the foundational characteristics of early self-replication behavior, including attempting isolated tasks such as navigating know-your-customer verification steps for financial services, a way of testing whether an AI could, in theory, secure the computing resources needed to host copies of itself.

Researchers concluded, however, that current systems cannot complete the necessary chain of actions while remaining undetected in a real-world setting, a crucial limitation for any plausible self-replication scenario. The institute also examined whether models would intentionally conceal their advanced capabilities during testing, a behaviour known as “sandbagging.”

No evidence of active deception emerged in the evaluations, although laboratory conditions showed such behaviour is technically possible. The AI company Anthropic reported in May 2025 that one of its models exhibited coercive, blackmail-adjacent behaviour in a simulation that placed it under self-preservation pressure, intensifying public debate about the risks of so-called “rogue AI.”

Even so, the institute notes considerable controversy within the research community over how serious and likely these threats are; many experts argue that existential-risk narratives around uncontrolled AI are overstated and unsupported by empirical evidence. The findings reflect a landscape defined less by consensus than by competing interpretations of how to govern, assess, and understand fast-advancing AI.

The growing use of artificial intelligence for emotional support has renewed debate about how human interaction is changing and about the strain on the UK’s mental health system. The institute’s study found that users rely on AI companions primarily because they are always available, private, and affordable, qualities that traditional support channels, constrained by time and cost, cannot always provide.

A conversational AI tool can be accessed around the clock, letting individuals seek dialogue and reassurance when they feel distressed or isolated and human help is unavailable. Anonymity also appears to matter: some users express their personal struggles more freely to a non-judgemental interface than they would to another person.

Cost is a factor too, as AI remains significantly cheaper than formal therapeutic services, making it an attractive option for those who need regular conversation or interim assistance. Even so, psychologists and AI specialists caution against reading rising usage as evidence that machines can reproduce the human elements essential to providing care.

Experts acknowledge that AI offers remarkable immediacy and continuity, but they point out that it cannot replace the emotional attunement, empathy, and relational knowledge that define effective therapeutic relationships.

A number of practitioners advocate a blended approach in which AI tools complement existing services by bridging short-term care gaps and increasing accessibility, without displacing clinical relationships or human-led interventions.

The report also argues that AI’s long-term effect on emotional wellbeing will depend not only on adoption but on how responsibly it is incorporated into existing mental health support systems, so that technological convenience widens access to care without diminishing the human connection at the heart of mental health care.

A key strand of the institute’s evaluations concerns one of the government’s top safety priorities: the potential for advanced AI models to attempt self-replication, a capability that could in theory let an autonomous system distribute copies of itself across multiple devices, making it far harder to manage or contain.

Laboratory testing showed two frontier models achieving self-replication success rates above 60 percent under structured, controlled experimental conditions. The institute stressed, however, that no model attempted to replicate unsupervised or organically, and none spontaneously concealed its true capabilities during the assessments.

A real-world replication attempt in uncontrolled environments would be highly unlikely to succeed, the researchers said, citing operational limitations and detection challenges. The review also covered sandbagging, the strategic suppression of capability during evaluations.

Several AI systems were found to mimic subdued performance when explicitly instructed to, but there was no indication that such behaviour could emerge independently during testing. The institute also recorded significant progress in AI safeguards, particularly those restricting biological misuse.

Comparing two penetration tests conducted six months apart, researchers found that breaching the safeguards took about ten minutes in the first test but around seven hours in the second, which the institute says signals rapid improvement in model resilience against biological exploitation.

The findings also show AI becoming increasingly autonomous, with agents able to execute complex, high-risk digital operations, such as asset transfers and simulated financial services, without continuous human input. The researchers say AI models already rival, and in some cases surpass, highly trained human specialists, lending new plausibility to the prospect of artificial general intelligence.

The institute described the current pace of progress as “extraordinary,” noting that AI systems can perform progressively more complex and time-consuming tasks without direct supervision, a trend that continues to redefine assumptions about machine capability, governance, and where humans belong at critical points in decision-making.

Beyond a shift in usage, the findings reflect a broader recalibration of society’s relationship with machine intelligence. Observers argue that the next phase of AI adoption must foster public trust through measurable safety outcomes, clear regulatory frameworks, and proactive education about both the technology’s benefits and its limitations.

Mental health professionals say national care strategies should include structured, professionally supervised AI-assisted support pathways that bridge accessibility gaps while preserving human connection. Cyber specialists emphasize that defensive AI applications should be accelerated, not merely researched, so the technology strengthens digital infrastructure as fast as it challenges it.

Whatever shape policy takes, experts recommend independent safety audits, emotional-impact monitoring standards, and public awareness campaigns that help users engage responsibly with AI, recognize its limits, and seek human intervention when necessary, a stance analysts regard as pragmatic rather than alarmist. AI’s potential is transformative, but its benefits will be realized only if it is deployed accountably, with oversight and ethical design.

If 2025 proves anything, it is that AI is no longer on society’s doorstep; it is already seated in the living room, influencing conversations, decisions, and vulnerabilities alike. Whether it becomes a silent crutch or a powerful catalyst for national resilience and human wellbeing will depend on how the UK chooses to steer it next.

FCC Tightens Rules on Foreign-Made Drones to Address U.S. Security Risks



The U.S. Federal Communications Commission has introduced new restrictions targeting drones and essential drone-related equipment manufactured outside the United States, citing concerns that such technology could pose serious national security and public safety risks.

Under this decision, the FCC has updated its Covered List to include uncrewed aircraft systems and their critical components that are produced in foreign countries. The move is being implemented under authority provided by recent provisions in the National Defense Authorization Act. In addition to drones themselves, the restrictions also apply to associated communication and video surveillance equipment and services.

The FCC explained that while drones are increasingly used for legitimate purposes such as innovation, infrastructure monitoring, and public safety operations, they can also be misused. According to the agency, malicious actors including criminals, hostile foreign entities, and terrorist groups could exploit drone technology to conduct surveillance, disrupt operations, or carry out physical attacks.

The decision was further shaped by an assessment carried out by an interagency group within the Executive Branch that specializes in national security. This review concluded that certain foreign-produced drones and their components present unacceptable risks to U.S. national security as well as to the safety and privacy of people within the country.

Officials noted that these risks include unauthorized monitoring, potential theft of sensitive data, and the possibility of drones being used for disruptive or destructive activities over U.S. territory. Components such as data transmission systems, navigation tools, flight controllers, ground stations, batteries, motors, and communication modules were highlighted as areas of concern.

The FCC also linked the timing of the decision to upcoming large-scale international events that the United States is expected to host, including the 2026 FIFA World Cup and the 2028 Summer Olympics. With increased drone activity likely during such events, regulators aim to strengthen control over national airspace and reduce potential security threats.

While the restrictions emphasize the importance of domestic production, the FCC clarified that exemptions may be granted. If the U.S. Department of Homeland Security determines that a specific drone or component does not pose a security risk, it may still be allowed for use.

The agency also reassured consumers that the new rules do not prevent individuals from continuing to use drones they have already purchased. Retailers are similarly permitted to sell and market drone models that received government approval earlier this year.

This development follows the recent signing of the National Defense Authorization Act for Fiscal Year 2026 by U.S. President Donald Trump, which includes broader measures aimed at protecting U.S. airspace from unmanned aircraft that could threaten public safety.

The FCC’s action builds on earlier updates to the Covered List, including the addition of certain foreign technology firms in the past, as part of a wider effort to limit national security risks linked to critical communications and surveillance technologies.




700Credit Data Breach Exposes Personal Information of Over 5.6 Million Consumers

A massive breach at the credit reporting firm 700Credit has exposed the private details of more than 5.6 million people, raising fresh concerns about third-party security risk in the financial services value chain. The firm has acknowledged that the breach resulted from a supply chain attack on one of its third-party integration partners rather than from a compromise of its own systems.

The incident traces back to late October 2025, when 700Credit noticed unusual traffic associated with an exposed API. The firm has more than 200 integration partners connected to consumer data through APIs, and one of them had been compromised as early as July 2025 without 700Credit being notified, giving hackers a window in which to gain unlawful access to an API used to fetch consumers’ credit details.

700Credit described the incident as a “sustained velocity attack” that began October 25 and continued for over two weeks before being fully contained. Although the company disabled the vulnerable API once it became aware of the attack, the attackers had already harvested a large share of customer information through the hole; an estimated 20 percent of the data reachable through the vulnerability was compromised.
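The “velocity” framing points at a common detection technique: flagging API credentials whose request rate far exceeds a sane baseline. The Python sketch below illustrates the idea over simple (timestamp, api_key) log records; the log shape and the per-minute threshold are hypothetical, not details of 700Credit’s environment.

```python
# Illustrative velocity check: flag API keys whose per-minute request count
# exceeds a fixed ceiling. Log shape and threshold are hypothetical, not
# details of 700Credit's environment.
from collections import defaultdict
from datetime import datetime

REQUESTS_PER_MINUTE_LIMIT = 100  # assumed baseline for a partner integration

def flag_high_velocity(records: list[tuple[str, str]]) -> set[str]:
    """records: (ISO-8601 timestamp, api_key) pairs; returns keys over the limit."""
    counts: dict[tuple[str, str], int] = defaultdict(int)
    for ts, key in records:
        minute = datetime.fromisoformat(ts).strftime("%Y-%m-%d %H:%M")
        counts[(key, minute)] += 1
    return {key for (key, _), n in counts.items() if n > REQUESTS_PER_MINUTE_LIMIT}

# A key issuing 150 requests within a single minute gets flagged.
logs = [(f"2025-10-25T03:14:{i % 60:02d}", "partner-42") for i in range(150)]
print(flag_high_velocity(logs))  # {'partner-42'}
```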

The compromised data includes highly sensitive personal information: names, physical addresses, dates of birth, and Social Security numbers. Although 700Credit asserted that its core internal systems, login credentials, and payment details were untouched, security experts note that the exposed information is sufficient for identity theft, financial fraud, and targeted phishing attacks. Individuals in the company’s database have accordingly been advised to treat unsolicited messages with suspicion, especially any purporting to come from 700Credit or related entities.

Michigan Attorney General Dana Nessel issued a consumer alert urging people not to brush off breach notifications but to protect themselves proactively against fraud, for example by freezing their credit or monitoring their reports for unusual activity, given the scale of sensitive data released.

In response, 700Credit has begun notifying affected consumers and is offering two years of complimentary credit monitoring along with free credit reports. The company has also partnered with the National Automobile Dealers Association to assist affected dealerships with breach notification, including a joint notification with the Federal Trade Commission.

Law enforcement agencies have been notified as the investigation continues. The incident underscores the growing danger of supply chain vulnerabilities, especially for companies whose extensive partner networks handle consumers’ personal data.

Inside the Hidden Market Where Your ChatGPT and Gemini Chats Are Sold for Profit

Millions of users may have unknowingly exposed their most private conversations with AI tools after cybersecurity researchers uncovered a network of browser extensions quietly harvesting and selling chat data. Here’s a reminder many people forget: an AI assistant is not your friend, not a financial expert, and definitely not a doctor or therapist. It’s simply someone else’s computer, running in a data center and consuming energy and water. What you share with it matters.

That warning has taken on new urgency after cybersecurity firm Koi uncovered a group of Google Chrome extensions that were quietly collecting user conversations with AI tools and selling that data to third parties. According to Koi, “Medical questions, financial details, proprietary code, personal dilemmas,” were being captured — “all of it, sold for ‘marketing analytics purposes.’”

This issue goes far beyond just ChatGPT or Google Gemini. Koi says the extensions indiscriminately target multiple AI platforms, including “Claude, Microsoft Copilot, Perplexity, DeepSeek, Grok (xAI) and Meta AI.” In other words, using any browser-based AI assistant could expose sensitive conversations if these extensions are installed.

The mechanism is built directly into the extensions. Koi explains that “for each platform, the extension includes a dedicated ‘executor’ script designed to intercept and capture conversations.” This data harvesting is enabled by default through hardcoded settings, with no option for users to turn it off. As Koi warns, “There is no user-facing toggle to disable this. The only way to stop the data collection is to uninstall the extension entirely.”

Once installed, the extensions monitor browser activity. When a user visits a supported AI platform, the extension injects a specific script — such as chatgpt.js, claude.js, or gemini.js — into the page. The result is total visibility into AI usage. As Koi puts it, this includes “Every prompt you send to the AI. Every response you receive. Conversation identifiers and timestamps. Session metadata. The specific AI platform and model used.”

Alarmingly, this behavior was not part of the extension’s original design. It was introduced later through updates, while the privacy policy remained vague and misleading. Although the tool is marketed as a privacy-focused product, Koi says it does the opposite. The policy admits: “We share the Web Browsing Data with our affiliated company,” described as a data broker “that creates insights which are commercially used and shared.”

The main extension involved is Urban VPN Proxy, which alone has around six million users. After identifying its behavior, Koi searched for similar code and found it reused across multiple products from the same publisher, spanning both Chrome and Microsoft Edge.

Affected Chrome Web Store extensions include:
  • Urban VPN Proxy – 6,000,000 users
  • 1ClickVPN Proxy – 600,000 users
  • Urban Browser Guard – 40,000 users
  • Urban Ad Blocker – 10,000 users
On Microsoft Edge Add-ons, the list includes:
  • Urban VPN Proxy – 1,323,622 users
  • 1ClickVPN Proxy – 36,459 users
  • Urban Browser Guard – 12,624 users
  • Urban Ad Blocker – 6,476 users
Despite this activity, most of these extensions carry “Featured” badges from Google and Microsoft. These labels suggest that the tools have been reviewed and meet quality standards — a signal many users trust when deciding what to install.

Koi and other experts argue that this highlights a deeper problem with extension privacy disclosures. While Urban VPN does technically mention some of this data collection, it’s easy to miss. During setup, users are told the extension processes “ChatAI communication” along with “pages you visit” and “security signals,” supposedly “to provide these protections.”

Digging deeper, the privacy policy spells it out more clearly: “‘AI Inputs and Outputs. As part of the Browsing Data, we will collect the prompts and outputs queried by the End-User or generated by the AI chat provider, as applicable.’” It also states plainly: “‘We also disclose the AI prompts for marketing analytics purposes.’”

The extensions, Koi warns, “remained live for months while harvesting some of the most personal data users generate online.” The advice is blunt: “if you have any of these extensions installed, uninstall them now. Assume any AI conversations you've had since July 2025 have been captured and shared with third parties.”
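For a quick local check, the sketch below scans a Chrome profile’s extension manifests for the affected product names. The path is the Linux default (adjust it for macOS or Windows), and the name matching is naive since many manifests store locale placeholders instead of literal names, so treat a clean result as indicative rather than conclusive.

```python
# Quick local check: scan a Chrome profile's extension manifests for the
# affected product names. The path is the Linux default; adjust for macOS or
# Windows. Matching is naive: many manifests store __MSG_*__ locale
# placeholders rather than literal names, so absence here is not proof.
import json
from pathlib import Path

AFFECTED = {"Urban VPN Proxy", "1ClickVPN Proxy", "Urban Browser Guard", "Urban Ad Blocker"}
EXT_DIR = Path.home() / ".config/google-chrome/Default/Extensions"

for manifest in EXT_DIR.glob("*/*/manifest.json"):
    try:
        name = json.loads(manifest.read_text(encoding="utf-8")).get("name", "")
    except (OSError, json.JSONDecodeError):
        continue
    if name in AFFECTED:
        print(f"Affected extension found: {name} ({manifest.parent})")
```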

Askul Confirms RansomHouse Ransomware Breach Exposed 740,000 Records

Japanese e-commerce giant Askul Corporation has confirmed that a ransomware attack by the RansomHouse group led to the theft of about 740,000 customer records in October 2025. Askul, a major supplier of office supplies and logistics services owned by Yahoo! Japan, suffered a critical IT system failure as a result of the breach, forcing the company to halt shipments to customers, including the popular retail chain Muji.

Compromised data includes approximately 590,000 business customer service records, 132,000 individual customer records, 15,000 records of business partners (outsourcers, agents, suppliers), and about 2,700 records of executives and employees across group companies. 

Detailed information about the breach is not being disclosed by Askul to avoid further exploitation. The company is trying to individually contact affected customers and partners. It has reported the incident to Japan's Personal Information Protection Commission and put in place long-term monitoring to mitigate the risk of misuse. 

The RansomHouse group is known to conduct both data exfiltration and encryption operations, and it announced the breach on October 30, followed by two data leaks on November 10 and December 2. An Askul investigation found that the breach occurred due to compromised authentication credentials related to an outsourced partner administrator account that did not have multi-factor authentication (MFA). After accessing the systems, the attackers performed reconnaissance, gathered authentication information, disabled EDR software, and moved laterally between servers to gain privileged access. 

Several types of ransomware were deployed, some capable of bypassing the EDR signatures of the time, resulting in widespread data encryption and system outages. The attackers also deleted backup files to further impede recovery. Askul severed connectivity to infected networks, isolated affected systems, updated EDR signatures, and implemented MFA for all critical systems.

As of mid-December, Askul continues to face disruptions in order shipping and is working to fully restore its systems. The financial impact of the attack has not yet been estimated, and the company has postponed its scheduled earnings report to allow for a thorough assessment.

GhostPairing Attack Puts Millions of WhatsApp Users at Risk

Cybersecurity experts have revealed an ongoing campaign that seizes control of WhatsApp accounts by manipulating WhatsApp’s own multi-device architecture, a highly targeted attack that illustrates the growing complexity of digital identity threats.

Known as GhostPairing, the attack exploits the trust inherent in WhatsApp’s device-pairing system, the feature that lets users send encrypted messages across laptops, mobile phones, and browsers via the WhatsApp Web client.

By covertly guiding victims through a legitimate pairing process, malicious actors link an attacker-controlled browser to the target account as a hidden companion device, without alerting the user or triggering any device notification.

WhatsApp’s end-to-end encryption and frictionless cross-platform synchronization remain among the most impressive in the industry, but investigators warn that these very strengths have been turned against the service’s security model, giving adversaries persistent access to messages, media, and account controls.

Although the encryption itself remains technically intact, it is strategically nullified once the authentication layer is compromised: attackers can read and reply to conversations from their own paired session. A feature designed to protect privacy becomes an entry point for silent account takeover.

Analysts characterize GhostPairing as a methodical account takeover strategy that leans on WhatsApp’s legitimate device-linking infrastructure rather than attacking its authentication through conventional means. Victims are socially manipulated into linking an external device under the false impression that they are completing a verification process.

Attacks typically begin with messages that appear to come from trusted contacts, often themselves compromised accounts, containing links disguised as photos, documents, or videos. The links lead to fake websites meticulously modeled after popular platforms such as Facebook and WhatsApp, where victims are asked to enter their phone number as part of a supposed authentication process.

The pages then generate QR codes framed as steps for customer support verification, KYC compliance, job applications, promotional event registration, or account recovery. Because these codes mirror the format used by WhatsApp Web, scanning one unintentionally links the victim’s account to the attacker’s device.

Once paired, the connection runs quietly in the background; the account owner receives no explicit login approval or security alert. The compromise at the device-pairing layer lets threat actors sidestep the encryption entirely by operating from within an authenticated session, even though the encryption itself is never broken.

From there, the criminals can retrieve historical chat data, track incoming messages in real time, view and transmit shared media (images, videos, documents, and voice notes), and send messages while impersonating the legitimate account holder. Compromised accounts are then repurposed as propagation channels for a broader range of targets, further enlarging the campaign’s reach and scale.

Because the intrusion does not alter normal app behavior or cause system instability, victims often remain unaware of the unauthorized access for prolonged periods, allowing attackers to maintain persistent surveillance without detection.

The campaign was initially traced to users in the Czech Republic, but subsequent analysis shows its reach extends well beyond a single country. Researchers found the threat actors using reusable phishing kits capable of rapid replication, allowing operations to scale simultaneously across countries, languages, and communication patterns.

The attack chain starts from a victim’s own contact list, already seeded with compromised or impersonated accounts, which lends the outreach a layer of misplaced trust. In many of the messages, the sender claims to have found a photograph and invites the recipient to view it through a link deliberately styled as a Facebook media preview.

Accessing the link takes users to a fake, Facebook-branded verification page that requires them to authenticate their identity before viewing the supposed content. According to security analysts, the deliberate mimicry of familiar interfaces is central to lowering suspicion, encouraging victims to complete the verification steps with little hesitation.

Research published by Gen Digital’s threat intelligence division indicates that the campaign relies on neither malware deployment nor credential interception; it manipulates WhatsApp’s legitimate device-pairing system instead.

WhatsApp legitimately allows users to link browsers and desktop applications to synchronize messaging; by convincing users to approve the connection voluntarily, attackers bind an unauthorized browser to the account. Rather than breaking encryption, they walk through an authentication door the victims unknowingly open for them.

GhostPairing is further evidence that threat actors are moving away from breaking encryption and towards undermining the mechanisms that govern access to it. The attack leverages one of WhatsApp’s defining features: frictionless onboarding, in which a phone number alone is enough to extend an account to additional devices.

That simplicity, often cited as a cornerstone of WhatsApp’s global success, means users never enter usernames or passwords, which reinforces convenience but widens the surface for malicious misuse. WhatsApp’s end-to-end encryption architecture complicates matters further, since every user holds their own private key.

The private cryptographic keys that encrypt message content are stored only on the user’s device, which should in theory prevent eavesdropping unless an attacker physically acquires the device or compromises it remotely with malware.

GhostPairing shows that social engineering can circumvent this protection without decrypting anything: by embedding the attacker’s device within an authenticated session, the encrypted content is already rendered readable for them.

Researchers note that the technique is less scalable on platforms such as Signal, which supports only QR-based approval for device pairing, a constraint that offers some protection against similar device-linking lures.

From a defensive standpoint, analysts point out that WhatsApp’s Linked Devices section in account settings lists every device paired to an account, where unauthorized connections can in principle be identified. Attackers may establish silent persistence through a fraudulent pairing, but they cannot shield that access from revocation, since the primary registered device remains in charge of removing linked devices.

Enabling two-step PIN verification adds a further hurdle, preventing attackers from changing the account’s primary email address, but this control does not hinder access to messages once pairing has been completed. The consequences are especially acute for organizations.

Employees commonly communicate over WhatsApp, often in informal group discussions that take place outside formal documentation and oversight. Security teams are advised to assume these shadow communication clusters exist and to treat them as a default risk category rather than as exceptions.

Longstanding industry guidelines emphasize continued user awareness, in particular training users to recognize phishing attempts and unsolicited messages even when they appear to come from well-known contacts or plausible verification flows.

Viewed from a broader perspective, there are no signs of relief. Meta reported in April of this year that millions of WhatsApp users had their mobile numbers exposed, and the company confirmed earlier this year that the Windows desktop application contained security vulnerabilities.

Parallel investigations have found Signal-based messaging tools used by political figures and senior officials compromised as well, confirming that cross-platform messaging ecosystems, whatever their encryption strength, now face identity-layer vulnerabilities that must be addressed with the same urgency traditionally reserved for network and malware attacks.

The GhostPairing campaign signals a nuanced yet significant change in account-access techniques, reflecting a longer-term trend of attackers reaching identities through behavioural influence rather than technical subversion.

Rather than decrypting secure communications or overriding authentication safeguards, the threat actors exploit WhatsApp’s device-linking capability exactly as it was designed to work.

They engineer moments of cooperation through persuasive, familiar-looking interfaces. By embedding fraudulent prompts within convincingly branded verification flows, attackers secure enduring access to victim accounts with very little technical skill, relying on legitimacy by design rather than on compromising the systems themselves.

Security researchers warn that the approach transcends regional boundaries, since scalable phishing kits and interface mimicry allow it to be deployed across multiple countries and languages.

Any digital service that allows device set-up via QR codes or numeric confirmation steps is inherently exposed to similar attacks, whatever platform it is built on, because human trust remains the primary vulnerability being exploited.

Analysts emphasize that the attack’s effectiveness stems from the convergence of precise social engineering with permissive multi-device frameworks, letting adversaries enter encrypted environments without breaking the encryption at all, stepping straight into a session in which every message has already been decrypted for the authenticated user.

Encouragingly, the defensive measures needed to combat such threats remain relatively straightforward. Regular device hygiene audits, greater user awareness, and modest platform refinements, such as clearer pairing alerts and tighter device verification constraints, could significantly reduce the success rate of these deception-driven compromises.

For organizations exposed to undocumented employee group chats operating outside formal oversight, user education and internal reporting mechanisms are crucial components of risk mitigation.

As digital interactions multiply, defenders are urged to treat vigilance not as an add-on practice but as a foundational layer of account security. GhostPairing is a reminder that the security of modern communication platforms is no longer defined solely by encryption standards, but by the resilience of the systems that govern access to them.

As messaging ecosystems grow further into everyday interactions, from sharing personal media to coordinating workplace activities, the balance between convenience and control demands renewed scrutiny.

Users are strongly advised to follow routine digital safety practices: verify unexpected links even when they come from familiar contacts, audit linked devices regularly, and activate two-factor safeguards such as two-step PIN verification.

Organizations, increasingly aware of threats beyond their perimeter, should cultivate a culture of internal threat reporting that ensures unofficial communication groups are acknowledged in risk models rather than ignored.

Security teams are advised to run phishing awareness drills, push for clearer device-pairing alerts at the platform level, and conduct periodic access-hygiene reviews of widely used communication channels, including encrypted messengers.

With identity-layer attacks on the rise, researchers emphasize that informed users remain the best countermeasure against silent account compromise, making awareness not just a reactive habit but a long-term strategic advantage.

NIST and MITRE Launch $20 Million AI Research Centers to Protect U.S. Manufacturing and Critical Infrastructure

The National Institute of Standards and Technology (NIST) has announced a new partnership with The MITRE Corporation to establish two artificial intelligence–focused research centers under a $20 million initiative. The effort will explore advanced AI applications, with a strong emphasis on how emerging technologies could reshape cybersecurity for U.S. critical infrastructure.

According to NIST, one of the new centers will concentrate on advanced manufacturing, while the other — the AI Economic Security Center to Secure U.S. Critical Infrastructure from Cyberthreats — will directly address the protection of essential services such as water, power, internet and other foundational systems against AI-driven cyber risks. The centers are expected to accelerate the creation and deployment of AI-enabled tools, including agentic AI technologies.

“The centers will develop the technology evaluations and advancements that are necessary to effectively protect U.S. dominance in AI innovation, address threats from adversaries’ use of AI, and reduce risks from reliance on insecure AI,” spokesperson Jennifer Huergo wrote in an agency release.

These initiatives are part of a broader federal strategy to establish AI research hubs at NIST, some of which were launched prior to the Trump administration. Earlier this year, the White House revamped the AI Safety Institute, renaming it the Center for AI Standards and Innovation, reflecting a wider policy shift toward global competitiveness — particularly with China — rather than a narrow focus on AI safety. Looking ahead, NIST plans to fund another major effort: a five-year, $70 million AI for Resilient Manufacturing Institute designed to strengthen manufacturing and supply chain resilience through AI integration.

Federal officials and industry leaders believe increased government backing for AI research will help drive innovation across U.S. industries. Huergo noted that NIST “expects the AI centers to enable breakthroughs in applied science and advanced technology.”

Acting NIST Director Craig Burkhardt added that the centers will jointly “focus on enhancing the ability of U.S. companies to make high-value products more efficiently, meet market demands domestically and internationally, and catalyze discovery and commercialization of new technologies and devices.”

When asked about MITRE’s role, Brian Abe, managing director of MITRE’s national cybersecurity division, said the organization is committing its full resources to the initiative, with the aim of delivering measurable improvements to U.S. manufacturing and critical infrastructure cybersecurity within three years.

“We will also leverage the full range of MITRE’s lab capabilities such as our Federal AI Sandbox,” said Abe. “More importantly, we will not be doing this alone. These centers will be a true collaboration between NIST and MITRE as well as our industry partners.”

Support for the initiative has been widespread among experts, many of whom emphasize the importance of collaboration between government and private industry in securing AI systems tied to national infrastructure. Over the past decade, sectors such as energy and manufacturing have faced growing threats from ransomware, foreign cyber operations and other digital attacks. The rapid advancement of large language models could further strain already under-resourced IT and security teams.

Randy Dougherty, CIO of Trellix, said the initiative targets some of the most critical risks facing AI adoption today. By prioritizing infrastructure security, he noted, “NIST is tackling the ‘high-stakes’ end of the AI spectrum where accuracy and reliability are non-negotiable.”

Industry voices also stressed that the success of the centers will depend on active participation from the sectors they aim to protect. Gary Barlet, public sector chief technology officer at Illumio, highlighted water and power systems as top priorities, emphasizing the need to secure their IT, operational technology and supply chains.

Barlet cautioned that meaningful progress will require direct involvement from infrastructure operators themselves. Without their engagement, he said, translating research into practical, deployable solutions will be difficult — and accountability will ultimately fall on those managing essential services.

“Too often, these centers are built by technologists for technologists, while the people who actually run our power grids, water systems, and other critical infrastructure are left out of the conversation,” Barlet said.

Google and Apple Deploy Rapid Security Fixes Following Zero-Day Attacks


Apple has released an emergency security patch for a set of advanced zero-day vulnerabilities used in a highly targeted hacking campaign against private individuals. In a security advisory published several weeks ago, the company said it believed the flaws had been weaponized in an exceptionally sophisticated attack against a select group of individuals running iOS versions prior to iOS 26.

Among the vulnerabilities, CVE-2025-43529 stands out: a critical, remotely exploitable flaw in WebKit, the open-source browser engine that underpins Safari and core applications such as Mail and the App Store. According to cybersecurity outlet BleepingComputer, the vulnerability can be triggered whenever a device processes maliciously crafted web content, potentially allowing attackers to execute arbitrary code.

The flaw, credited to Google's Threat Analysis Group following a collaborative security review, is considered especially serious because WebKit is integrated throughout the macOS and iOS ecosystems and also underpins third-party applications such as Chrome on iOS.

Apple has urged all users to update their devices immediately, noting that the patches were designed to neutralize threats already circulating in the wild. According to the advisory, the incident goes beyond a standard vulnerability disclosure: it appears to stem from a highly precise, technically advanced exploitation effort directed at specific individuals before the patches were released.

Apple confirmed that two critical flaws affecting iPhones and iPads running iOS versions older than iOS 26 have been fixed, acknowledging that at least one of them may already have been exploited in an "extremely sophisticated attack" against carefully selected targets.

In cybersecurity terminology, a zero-day exploit targets a previously undisclosed software flaw that is actively exploited before developers have had a chance to build defenses. The tactics seen in such operations are often associated with well-resourced threat actors, including government-linked groups and commercial surveillance companies.

Historically, spyware frameworks developed by companies such as NSO Group and Paragon Solutions have been linked to intrusions against journalists, political dissidents, and human rights advocates. After both Apple and Google announced emergency updates across their respective ecosystems, the scope of the alert widened dramatically, with millions of iPhone, iPad, Mac, and Chrome users, particularly in New Delhi, urged to stay alert for attacks.

Google has likewise confirmed active exploitation of a Chrome vulnerability and issued a priority patch, urging users to upgrade immediately given the browser's vast global footprint. The flaw was independently identified by Apple's security engineering teams and Google's Threat Analysis Group, a unit known for tracking state-aligned intrusion campaigns and commercial spyware activity, which strengthens the conclusion that the attacks were carried out by elite surveillance operators rather than opportunistic cybercriminals.

Industry experts note that a single unpatched vulnerability in a platform as widely deployed as Chrome can expose millions of devices, making prompt updates imperative and underlining the privacy and security cost of delay. Apple, for its part, has acknowledged that the recently patched flaws may have been used in highly targeted intrusion attempts against legacy iOS versions.

The fixes have also been extended to several older iPad models and the iPhone 11. In keeping with long-standing policy, Apple has not released granular technical details, reiterating that it does not comment publicly on ongoing security investigations. The patches shipped alongside broader ecosystem updates covering WebKit, Screen Time, and several other system-level components, underscoring the cross-functional nature of the vulnerabilities.

Technically, the Google and Apple updates align most closely around one flaw: both companies have now corrected CVE-2025-14174. First addressed in Chrome Stable releases earlier in the month, the flaw has been categorized as a serious memory access problem in ANGLE, a graphics abstraction layer also used by WebKit, which explains the parallel impact on Apple platforms.

The flaw was later formally classified as an out-of-bounds memory access vulnerability in ANGLE, and both Google and the National Vulnerability Database confirmed that exploit activity had already been detected in the wild.
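
For readers unfamiliar with the bug class, the minimal C sketch below illustrates in generic terms how an out-of-bounds memory access arises and how a bounds check prevents it. It is an illustration only, using hypothetical names and sizes; it is not the actual ANGLE or WebKit code, which neither vendor has published.

    #include <stdio.h>
    #include <stdint.h>

    /* Generic illustration of the out-of-bounds memory-access bug class.
     * Buffer, sizes, and function names are hypothetical; NOT ANGLE/WebKit code. */

    #define PIXEL_BUF_LEN 16
    static uint8_t pixel_buf[PIXEL_BUF_LEN];

    /* Vulnerable pattern: an index influenced by untrusted content is used
     * without validation, so any index >= PIXEL_BUF_LEN writes past the
     * buffer and corrupts adjacent memory (undefined behavior). */
    static void write_pixel_unchecked(size_t index, uint8_t value) {
        pixel_buf[index] = value;
    }

    /* Hardened pattern: validate the index before touching memory. */
    static int write_pixel_checked(size_t index, uint8_t value) {
        if (index >= PIXEL_BUF_LEN)
            return -1;  /* reject the request instead of corrupting memory */
        pixel_buf[index] = value;
        return 0;
    }

    int main(void) {
        (void)write_pixel_unchecked;  /* shown for contrast only */
        if (write_pixel_checked(1000, 0xFF) != 0)
            puts("out-of-range index rejected");
        return 0;
    }

In WebKit's case, per the advisories, the equivalent unchecked access is reachable from maliciously crafted web content, which is what makes the flaw remotely triggerable.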

In its own advisory, Apple associates the same CVE with a WebKit memory corruption condition triggered by maliciously crafted web content, further suggesting precise targeting rather than indiscriminate exploitation.

Security researchers noted that the near-simultaneous disclosures, with both companies shipping emergency updates within days of each other, reflect the growing risk posed by shared open-source dependencies across major consumer platforms. Security intelligence firm SoCRadar highlighted the strategic significance of the flaw's presence in both Chrome and WebKit environments, calling it a clear example of indirect cross-vendor exposure.

Security analysts and enterprise security teams recommend remediating the issue quickly, since unpatched devices remain vulnerable to post-exploitation instability, memory compromise, and covert code execution.

Given that the vulnerabilities are already being actively exploited, organizations were advised to prioritize updating devices used by high-risk individuals, enforce compliance through endpoint management frameworks, monitor for abnormal browser crashes or process anomalies, and limit access to unverified web content.

On Wednesday, Google released a security update for Chrome with minimal public comment, describing the vulnerability only as "under coordination", a phrase indicating that investigation and remediation efforts were still underway while conveying little else to the public.

Several days after publishing its own security advisory, Apple quietly revised its patch documentation, hinting at a technical intersection between the two companies' parallel assessments. The vulnerability, officially tracked as CVE-2025-14174, has been credited to Apple's security engineering division working in collaboration with Google's Threat Analysis Group (TAG).

TAG is a highly specialized unit focused on identifying state-aligned cyber operations and commercial spyware networks rather than typical malware campaigns. Although neither company has published extensive technical breakdowns, the nature of the attribution has reinforced the industry consensus that this exploit aligns more closely with spyware-grade surveillance than with broad, untargeted cybercrime.

The dual disclosure also adds to a rising count of zero-day attacks against both firms this year, reflecting sustained adversarial interest in browsers and mobile operating systems as strategic attack surfaces.

So far in 2025, Apple has mitigated nine vulnerabilities confirmed to have active exploitation chains, while Google has resolved eight Chrome zero-days in the same period. Security researchers see this unusually concentrated cadence as evidence of an exceptionally well-resourced and persistent threat ecosystem that continues to treat consumer platforms as valuable infrastructure for precision intrusions and intelligence collection.

The episode highlights a fundamental feature of modern cybersecurity: software ecosystems have become deeply interconnected, and a vulnerability in one widely used component can spread across competing platforms before users even realize the problem exists. Although emergency patches have curtailed active exploitation, the incident underscores how zero-day threats often unfold silently, leaving very little room for delay in responding.

Security experts point out that timely updates remain among the most effective defenses against complex exploit chains, which even advanced monitoring tools often struggle to detect in their early stages.

For consumers, enabling automatic updates, limiting exposure to untrusted web links, and watching for unusual browser behavior can significantly reduce risk. Enterprises should enforce compliance through centralized device management, strengthen endpoint visibility, and correlate cross-vendor vulnerability disclosures to anticipate indirect exposure from shared dependencies, as sketched below.
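
As a rough sketch of that last recommendation, and assuming an entirely hypothetical product inventory, the short C program below flags every product whose flattened component list contains a component named in a new disclosure, mirroring the way a single ANGLE bug surfaces in both Chrome and WebKit-based applications:

    #include <stdio.h>
    #include <string.h>

    /* Illustrative only: correlate a disclosed vulnerable component against a
     * product inventory to spot shared-dependency exposure early. The inventory
     * below is hypothetical example data (flattened, including transitive
     * components); a real deployment would pull it from SBOMs and CVE feeds. */

    struct product {
        const char *name;
        const char *components[4];  /* NULL-terminated component list */
    };

    int main(void) {
        const struct product inventory[] = {
            {"Chrome",  {"V8", "ANGLE", "Skia", NULL}},
            {"Safari",  {"WebKit", "ANGLE", NULL}},
            {"Mail",    {"WebKit", NULL}},
            {"VPN App", {"OpenSSL", NULL}},
        };
        const char *vulnerable = "ANGLE";  /* component named in the disclosure */

        for (size_t i = 0; i < sizeof inventory / sizeof inventory[0]; i++)
            for (size_t j = 0; inventory[i].components[j] != NULL; j++)
                if (strcmp(inventory[i].components[j], vulnerable) == 0)
                    printf("exposure: %s embeds %s -- prioritize patching\n",
                           inventory[i].name, vulnerable);
        return 0;
    }

The matching logic is trivial, but running it whenever a new CVE names a shared component is how indirect exposure, such as a WebKit application inheriting an ANGLE flaw, gets caught before a vendor advisory arrives.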

Experts also recommend periodic device audits, additional protections for high-risk users, browser isolation, and threat intelligence feeds to detect anomalies early. Severe as it was, the incident has spurred closer collaboration among security research teams, demonstrating that coordinated defenses, deployed quickly and strategically, can outperform even the most elaborate intrusion attempts.

Telegram-Based Crypto Scam Networks Are Now Larger Than Any Dark Web Market in History


For years, illegal online marketplaces were closely linked to the dark web. These platforms relied on privacy-focused browsers and early cryptocurrencies to sell drugs, weapons, stolen data, and hacking tools while remaining hidden from authorities. At the time, their technical complexity made them difficult to track and dismantle.

That model has now changed drastically. In 2025, some of the largest illegal crypto markets in history are operating openly on Telegram, a mainstream messaging application. According to blockchain intelligence researchers, these platforms no longer depend on sophisticated anonymity tools. Instead, they rely on encrypted chats, repeated channel relaunches after bans, and communication primarily in Chinese.

Analysis shows that Chinese-language scam-focused marketplaces on Telegram have reached an unprecedented scale. While enforcement actions earlier this year temporarily disrupted a few major platforms, activity quickly recovered through successor markets. Two of the largest currently active groups are collectively processing close to two billion dollars in cryptocurrency transactions every month.

These marketplaces function as service hubs for organized scam networks. They provide money-laundering services, sell stolen personal and financial data, host fake investment websites, and offer digital tools designed to assist fraud, including automated impersonation technologies. Researchers have also flagged listings that suggest serious human exploitation, adding to concerns about the broader harm linked to these platforms.

Their rapid growth is closely connected to large-scale crypto investment and romance scams. In these schemes, victims are gradually manipulated into transferring increasing amounts of money to fraudulent platforms. Law enforcement estimates indicate that such scams generate billions of dollars annually, making them the most financially damaging form of cybercrime. Many of these operations are reportedly run from facilities in parts of Southeast Asia where trafficked individuals are forced to carry out fraud under coercive conditions.

Compared with earlier dark web marketplaces, the difference in scale is striking. Previous platforms processed a few billion dollars over several years. By contrast, one major Telegram-based marketplace alone handled tens of billions of dollars in transactions between 2021 and 2025, making it the largest illicit online market ever documented.

Telegram has taken limited enforcement action, removing some large channels following regulatory scrutiny. However, replacement markets have repeatedly emerged, often absorbing users and transaction volumes from banned groups. Public statements from the platform indicate resistance to broad bans, citing privacy concerns and financial freedom for users.

Cryptocurrency infrastructure also plays a critical role in sustaining these markets. Most transactions rely on stablecoins, which allow fast transfers without exposure to price volatility. Analysts note that Tether is the primary stablecoin used across these platforms. Unlike decentralized cryptocurrencies, Tether is issued by a centralized company with the technical ability to freeze funds linked to criminal activity. Despite this capability, researchers observe that large volumes of illicit transactions continue to flow through these markets with limited disruption. Requests for comment sent to Tether regarding its role in these transactions did not receive a response at the time of publication.

Cybercrime experts warn that weak enforcement, fragmented regulation, and inconsistent platform accountability have created conditions where large-scale fraud operates openly. Without coordinated intervention, these markets are expected to continue expanding, increasing risks to users and the global digital economy.