
Global Surge in Military Grade Spyware Puts Personal Smartphones at Risk



A growing surveillance threat is surfacing beneath global cybersecurity discourse as the UK's top cyber authority issues a stark assessment of the unchecked proliferation of commercial spyware capabilities. Initially restricted to tightly regulated law enforcement use, advanced intrusion tools are now deployed across more than 100 countries and are able to remotely compromise smartphones, bypass encrypted communications, and covertly activate device sensors. 

NSO Group and an increasingly opaque ecosystem of competitors are driving this rapid expansion, signaling a shift from targeted investigative use to a wider landscape in which state-aligned digital intrusion is increasingly commonplace. 

Despite the increasing accessibility and operational stealth of these threats, enterprises and operators of critical national infrastructure are not adequately prepared for their scale and sophistication. The evolving threat landscape is underpinned by the growing sophistication of modern spyware frameworks, which leverage "zero-click" exploitation chains to gain unauthorized access without any involvement from the user. 

NSO Group's Pegasus platform and Paragon's Graphite platform function as highly advanced intrusion suites, exploiting latent vulnerabilities in mobile operating systems to extract sensitive communications, media, geolocation information, and other artifacts while leaving minimal forensic traces. 

The commercial dynamics underpinning this ecosystem demonstrate both the magnitude of the challenge and its persistence. The Israeli developer NSO Group, widely associated with high-end surveillance tooling, was added to the United States Entity List in 2021 for supplying technologies to foreign governments that were then used to target a wide range of individuals, including government officials, journalists, business leaders, academics, and diplomats. 

The company defends these capabilities as serving legitimate anti-terrorism and law enforcement purposes, asserts that it lacks direct visibility into operational use, and says it retains the right to terminate client relationships in instances of verified misuse. 

NSO Group, however, represents only one node within a rapidly expanding vendor landscape. According to industry observers, including Casey, the sector is extremely profitable and growing quickly, with dozens of firms now offering comparable capabilities. 

According to estimates, more than 100 countries have procured mobile spyware, an increase over earlier assessments that indicated deployment across more than 80 national jurisdictions. For states lacking indigenous cyber expertise, commercial intrusion platforms offer a cost-effective shortcut to capabilities that would otherwise require years of development.

The National Cyber Security Centre has previously noted that, although these tools are intended for law enforcement purposes, there is credible evidence of their widespread use against journalists, human rights defenders, political dissidents, and foreign officials, with thousands of individuals targeted annually. 

Several leaked toolkits, including DarkSword, demonstrate the dispersal of capabilities once restricted to state intelligence agencies into less controlled environments, enabling state-aligned and criminal actors to launch attacks using vectors as inconspicuous as compromised web sessions on unpatched iOS devices. These are not merely theoretical risk models: operational exploits are being actively employed against targets who often assume device-level security as the foundation of their defenses. 

The victim profile has notably expanded to include corporate executives, financial professionals, and organizations handling valuable information, alongside journalists and political dissidents. Richard Horne, the director of the UK's National Cyber Security Centre, highlighted that a significant gap in industry readiness still remains. 

Many enterprises underestimate the capability and operational maturity of these surveillance tools. Essentially, this shift illustrates the democratization of offensive cyber tooling: sophisticated surveillance, once monopolized by a few intelligence agencies, is now available to a broad range of state actors lacking native cyber expertise. 

As a result, these capabilities are increasingly affordable and, at times, unintentionally disseminated, which fundamentally alters the threat equation. As advanced surveillance tools transition from tightly controlled assets to commercially traded products, they become increasingly difficult to contain, propagating through illicit channels that include corrupt procurement practices, insider exfiltration, and secondary resale markets. 

In the wake of this leakage, non-state actors, including organized criminal networks, have acquired capabilities that were previously available only to sovereign intelligence operations. The proliferation of state-linked campaigns, including those attributed to China and focused on large-scale data exfiltration, illustrates the use of such tools not only for immediate intelligence gain, but also to establish strategic prepositioning for future geopolitical conflicts. 

Traditional device-based safeguards and consumer privacy controls are only marginally effective against adversaries equipped with exploit chains developed specifically to circumvent them. International efforts to regulate and oversee exports are gaining momentum, but operational reality suggests that containment may already lag behind proliferation, which enables a significant expansion of attack surfaces across both civilian and enterprise digital environments. 

The convergence of commercial availability, technical sophistication and weak oversight has led to the normalization of capabilities that were once considered exceptional. These developments illustrate a structural shift in the cyber threat environment. 

Given the widespread adoption of such tools and their continual evolution and leakage, both the public and private sectors need to reassess their security assumptions at a fundamental level. Enterprises, critical infrastructure operators, and individual users no longer defend against isolated intrusions alone; they must navigate a complex ecosystem in which highly advanced surveillance techniques are widely accessible and increasingly resemble legitimate activity. 

In the absence of strengthened international coordination, enforceable controls, and a corresponding increase in defensive maturity, a continued erosion of digital trust is likely, resulting in compromise becoming not an anomaly, but an expected condition of operating within a hyperconnected environment.

AI Models Surpass Doctors in Emergency Diagnosis, Harvard Study Finds

A recent study conducted by researchers at Harvard University has revealed that advanced artificial intelligence systems are now capable of exceeding human doctors in both diagnosing medical conditions and determining treatment strategies, including in fast-paced, high-stakes emergency room environments. The research specifically highlights the capabilities of modern AI systems in handling complex clinical reasoning tasks that were traditionally considered exclusive to trained physicians.

The findings, published in the peer-reviewed journal Science, are based on a controlled comparison between OpenAI o1 and experienced attending physicians. To ensure realistic testing conditions, the study used 76 actual emergency department cases sourced from Beth Israel Deaconess Medical Center. These cases were evaluated across multiple stages of the diagnostic process, allowing researchers to assess performance under varying levels of available patient information.

At the earliest stage of patient assessment, commonly referred to as initial triage, where clinicians typically have only limited details about a patient’s condition, the AI model demonstrated a notable advantage. It was able to correctly identify either the exact diagnosis or a closely related condition in 67.1 percent of the cases. In comparison, the two physicians involved in the study achieved accuracy rates of 55.3 percent and 50 percent respectively. This suggests that even with minimal data, the AI system was more effective at narrowing down potential diagnoses.

As the diagnostic process progressed and additional clinical information became available during the emergency room evaluation phase, the model’s performance improved further. Its diagnostic accuracy increased to 72.4 percent, reflecting its ability to refine its conclusions with more context. The physicians also showed improvement at this stage, but their accuracy remained lower, at 61.8 percent and 52.6 percent. This stage is particularly important as it mirrors real-world conditions where doctors continuously update their assessments based on new findings.

In the final phase of care, when patients were admitted either to general hospital wards or intensive care units, the AI model continued to outperform its human counterparts. It achieved an accuracy rate of 81.6 percent, compared to 78.9 percent and 69.7 percent for the physicians. Although the performance gap narrowed slightly at this stage, the AI still maintained a measurable edge, indicating consistency across the full diagnostic timeline.

Beyond identifying illnesses, the study also evaluated how effectively the AI system could design clinical management plans. This included decisions such as selecting appropriate medications, including antibiotics, as well as handling complex and sensitive scenarios like end-of-life care planning. Across five evaluated case studies, the AI achieved a median performance score of 89 percent. In contrast, physicians scored significantly lower, averaging 34 percent when relying on traditional clinical resources and 41 percent when supported by GPT-4. This underlines a substantial gap in structured decision-making support.

The researchers acknowledged that while integrating AI into clinical workflows is often viewed as a high-risk approach due to patient safety concerns, its potential benefits are significant. They noted that wider adoption of such systems could help reduce diagnostic errors, minimize treatment delays, and address disparities in access to healthcare services. These factors collectively contribute to both improved patient outcomes and reduced financial strain on healthcare systems.

At the same time, the study emphasizes that current AI systems are not without limitations. Clinical medicine involves more than text-based data. Doctors routinely rely on non-verbal and non-textual cues, such as observing a patient’s physical discomfort, interpreting imaging results, and making judgment calls based on experience. These aspects are not fully captured by existing AI models, which means human expertise remains essential.

The authors further concluded that large language models have now surpassed many traditional benchmarks used to measure clinical reasoning abilities. However, they stress the urgent need for more detailed research, including real-world clinical trials and studies focused on human-AI collaboration, to determine how these systems can be safely and effectively integrated into healthcare settings.

In comments shared with The Guardian, lead researcher Arjun Manrai clarified that the findings should not be interpreted as suggesting that AI will replace doctors. Instead, he described the results as evidence of a major technological shift that is likely to transform the medical field in the coming years.

From a macro industry perspective, this study reflects a developing trend in which AI is increasingly being used to augment clinical decision-making. However, experts continue to caution that challenges such as data bias, accountability, regulatory oversight, and patient trust must be addressed before such systems can be widely deployed. The future of healthcare, therefore, is likely to involve a collaborative model where AI amplifies efficiency and accuracy, while human doctors provide critical judgment, ethical oversight, and patient-centered care.

Claude Desktop Silently Alters Browser Settings, Even on Uninstalled Browsers


Claude Desktop, Anthropic’s standalone AI app for macOS, has come under fire for quietly altering browser‑level settings on users’ machines—even when they have never installed or used certain browsers. Security and privacy researchers have found that the application drops browser‑configuration files across system‑wide directories, effectively pre‑authorizing future browser‑extension links between Claude and Chromium‑based browsers such as Chrome, Edge, Brave, Opera, and others.

Modus operandi 

Upon installation, Claude Desktop generates a Native Messaging manifest and helper binary that register Claude as a trusted “browser host” for several specific Chrome extension IDs. This manifest is placed inside browser‑host folders for multiple Chromium‑based browsers, including some a user may never have installed, meaning a future browser install could immediately grant Claude broad access to page content, form data, and session activity. Anthropic frames this as part of its “agentic” features that let the app automate tasks and interact with the web, but the lack of an explicit opt‑in notification has raised red flags. 

The biggest concern is that these configuration files persist beyond the scope of browsers a user actually runs. Even if a person never uses Chrome or a given Chromium browser, the manifest can already be waiting in the system’s browser‑host directories, pre‑staging a bridge that activates once a corresponding browser and Claude extension are installed. Because the desktop app rewrites these files on every launch, deleting them manually does not permanently remove the hooks unless Claude Desktop itself is uninstalled. 
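As a practical check, macOS users can audit the manifest placement described above by hand. The sketch below assumes the conventional Chromium NativeMessagingHosts locations under the user's Library folder; since the exact manifest filenames Claude uses are not documented here, it matches on manifest contents rather than filenames, which are assumptions to adapt as needed.

```python
import json
from pathlib import Path

# Conventional per-browser NativeMessagingHosts directories on macOS.
# (Standard Chromium locations; extend the list for other browsers.)
HOST_DIRS = [
    "Library/Application Support/Google/Chrome/NativeMessagingHosts",
    "Library/Application Support/Microsoft Edge/NativeMessagingHosts",
    "Library/Application Support/BraveSoftware/Brave-Browser/NativeMessagingHosts",
    "Library/Application Support/com.operasoftware.Opera/NativeMessagingHosts",
]

def looks_like_claude_host(manifest: dict) -> bool:
    """Heuristic: flag manifests whose host name or helper-binary path
    mentions Claude or Anthropic (content-based match, since the exact
    manifest filename is an assumption here)."""
    text = (manifest.get("name", "") + " " + manifest.get("path", "")).lower()
    return "claude" in text or "anthropic" in text

def scan(home: Path = Path.home()):
    """List suspect manifests and the extension IDs they pre-authorize."""
    findings = []
    for rel in HOST_DIRS:
        for mf in (home / rel).glob("*.json"):  # empty iterator if dir absent
            try:
                manifest = json.loads(mf.read_text())
            except (OSError, json.JSONDecodeError):
                continue
            if looks_like_claude_host(manifest):
                findings.append((mf, manifest.get("allowed_origins", [])))
    return findings

if __name__ == "__main__":
    for path, origins in scan():
        print(f"{path}: allowed origins {origins}")
```

Note that, per the reporting above, removing these files only helps if Claude Desktop itself is uninstalled, since the app rewrites them on every launch.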

Privacy and legal reactions 

Privacy experts and commentators have likened this behavior to “spyware‑like” activity, arguing that silently creating browser‑level hooks without clear consent violates the spirit, if not the letter, of privacy regulations such as the EU ePrivacy Directive. Alexander Hanff, a prominent privacy consultant, has explicitly labeled Claude Desktop’s behavior “spyware” and questioned how much of this browser integration is actually documented and disclosed to end users. Critics stress that such integrations should be opt‑in and transparent, rather than buried in vague terms‑of‑service language most users never read. 

For macOS users who have installed Claude Desktop, experts recommend reviewing whether they actually need the browser‑integration features and, if not, uninstalling the app entirely to remove lingering manifest files and host binaries. Some guides suggest manually cleaning native‑messaging‑host folders for various Chromium browsers and then restarting the browser after removal, although this is only effective if the desktop app is also gone. Until Anthropic adds clearer, upfront consent prompts and the option to disable or remove these hooks, users concerned about privacy should treat Claude Desktop’s browser integration as a potential risk and handle it accordingly.

npm Supply Chain Attack Spreads Worm Malware Stealing Developer Secrets Across Compromised Packages


Worry grows within the cybersecurity community following discovery of a fresh supply chain threat aimed at the npm platform, where self-replicating malicious code infiltrates public software libraries to harvest confidential information from coders. Though broad consumer impact seems minimal, investigators at Socket and StepSecurity confirm the assault specifically targets niche development setups - environments often overlooked in typical breach patterns. 

Detection came after automated systems flagged unusual network activity, leading analysts to trace payloads back to tampered dependencies uploaded under legitimate project names. Unlike older variants that rely on user interaction, this version activates silently once installed, transmitting credentials to remote servers without visible signs. Researchers emphasize the sophistication lies not in complexity but in timing: attacks unfold during build processes, evading standard runtime checks. 

From initial samples, it appears attackers maintain persistence by chaining exploits across multiple packages. Investigation continues into whether source repositories were breached directly or if hijacked maintainer accounts allowed upload privileges. Not far behind the initial breach, several packages tied to Namastex Labs began showing suspicious behavior. One after another, altered forms of @automagik/genie, pgserve, and similar tools appeared online without warning. 

What started as isolated reports now points to a wider pattern unfolding quietly. Though some tainted releases have been pulled, fresh variants continue turning up unexpectedly. Danger comes from how the code spreads itself automatically. Right after a package installs, it acts like a worm - starting fast, grabbing key details from the system it hits. Things such as API tokens show up on the list, along with SSH keys, cloud login info, and hidden codes used in software build tools, containers, or AI setups. 

Off it goes, sending what it finds to servers run by attackers. Despite lacking conclusive proof, analysts observe patterns matching past operations tied to TeamPCP. Similarities emerge in how malware activates upon installation, grabs login details, and uses distributed infrastructure for spreading code and storing stolen data. What makes this malware more than just a thief is how it pushes outward without pause. 

Once inside, it hunts for npm login details and identifies which libraries the developer can upload. Harmful scripts are then inserted and republished, turning trusted tools into hidden entry points. If Python credentials appear, the same process spreads into PyPI. Not just traditional systems are at risk - crypto-linked holdings face exposure too, with data targeted from tools like MetaMask and Phantom. One weak spot in a developer’s setup can ripple outward, showing how quickly risks spread across software ecosystems.
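One way to surface packages that can run code at install time, which is the moment this worm activates, is to audit the project's lockfile. Below is a minimal sketch, assuming an npm lockfile in v2/v3 format (where each entry may carry a `hasInstallScript` flag) and using the package names reported above purely as an illustrative watchlist; it is a triage aid, not a malware detector.

```python
import json
from pathlib import Path

# Illustrative watchlist drawn from the names reported in this incident.
SUSPICIOUS_NAMES = {"@automagik/genie", "pgserve"}

def audit_lockfile(lock_path):
    """Flag locked dependencies that run install scripts or match the watchlist.

    Works on package-lock.json v2/v3, whose flat "packages" map marks
    entries that execute lifecycle scripts with "hasInstallScript"."""
    data = json.loads(Path(lock_path).read_text())
    flagged = []
    for pkg_path, meta in data.get("packages", {}).items():
        # "node_modules/foo" or "node_modules/a/node_modules/foo" -> "foo"
        name = pkg_path.split("node_modules/")[-1] if pkg_path else data.get("name", "")
        if meta.get("hasInstallScript") or name in SUSPICIOUS_NAMES:
            flagged.append((name, meta.get("version"), bool(meta.get("hasInstallScript"))))
    return flagged

if __name__ == "__main__":
    for name, version, has_script in audit_lockfile("package-lock.json"):
        print(f"{name}@{version} install-script={has_script}")
```

Installing with scripts disabled (`npm install --ignore-scripts`) and reviewing anything this audit flags is a common hardening step against install-time payloads.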

Hackers Target Cloud Apps Using Phone Scams and Login Tricks

Cybersecurity researchers have identified two threat groups that are executing fast-moving attacks almost entirely within software-as-a-service environments, allowing them to operate with very little visible trace of intrusion.

The groups, tracked as Cordial Spider and Snarky Spider, are also known by multiple alternate identifiers across different security vendors. Investigations show that both groups are involved in high-speed data theft followed by extortion attempts, and their methods show a strong overlap in how operations are carried out. Analysts assess that these groups have been active since at least October 2025. One of them is believed to be composed of native English speakers and is linked to a cybercrime network widely referred to as “The Com.”

According to findings from CrowdStrike, these attackers primarily rely on voice phishing, also known as vishing, to initiate their intrusions. In these cases, individuals are contacted and guided toward fraudulent login pages that are designed to imitate single sign-on systems. These pages act as adversary-in-the-middle setups, meaning they intercept and capture authentication data, including login credentials and session details, as the victim enters them. Once this information is obtained, attackers immediately use it to access SaaS applications that are connected through single sign-on integrations.

Researchers explain that the attackers deliberately operate within trusted SaaS platforms to avoid raising suspicion. Because their activity takes place inside legitimate services already used by organizations, their presence generates fewer detectable signals. This allows them to move quickly from initial compromise to data access. The combination of speed, targeted execution, and reliance on SaaS-only environments makes it harder for defenders to monitor and respond effectively.

Earlier research published in January 2026 by Mandiant revealed that these attack patterns represent a continuation of tactics seen in extortion-focused campaigns linked to the ShinyHunters group. These operations involve impersonating IT staff during phone calls to build trust with victims, then directing them to phishing pages in order to collect both login credentials and multi-factor authentication codes.

More recent analysis from Palo Alto Networks Unit 42 and the Retail & Hospitality ISAC indicates, with moderate confidence, that one of the identified clusters is associated with The Com network. These attacks rely heavily on living-off-the-land techniques, where attackers use legitimate system tools instead of introducing malware. They also make use of residential proxy networks to mask their real geographic location and to evade basic IP-based security filtering systems.

Since February 2026, activity linked to one of these clusters has been directed toward organizations in the retail and hospitality sectors. The attackers combine vishing calls, often impersonating IT help desk personnel, with phishing websites designed to capture employee credentials.

Once access is established, the attackers take steps to maintain long-term control. They register a new device within the compromised account to ensure continued access, and in many cases remove previously registered devices. After doing so, they modify email settings by creating inbox rules that automatically delete notifications related to new device logins or suspicious activity, preventing the legitimate user from being alerted.
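Defenders can hunt for exactly this tampering by auditing mailbox rules. The sketch below assumes rules exported in the shape of Microsoft Graph `messageRule` objects; the field names and keyword list are assumptions that will differ for other mail providers, so treat the schema here as illustrative.

```python
# Keywords commonly seen in security-notification subjects (assumed list;
# tune for your organization's actual alert wording).
SECURITY_KEYWORDS = ("new device", "sign-in", "login", "security alert", "suspicious")

def flag_suspicious_rules(rules):
    """Flag inbox rules that silently delete or divert security-related mail.

    `rules` is assumed to follow the Microsoft Graph messageRule shape:
    {"displayName": ..., "conditions": {...}, "actions": {...}}."""
    flagged = []
    for rule in rules:
        conds = rule.get("conditions") or {}
        actions = rule.get("actions") or {}
        keywords = [k.lower() for k in
                    conds.get("subjectContains", []) + conds.get("bodyContains", [])]
        hits_security = any(sk in k for k in keywords for sk in SECURITY_KEYWORDS)
        destructive = (actions.get("delete")
                       or actions.get("permanentDelete")
                       or actions.get("moveToFolder") == "deleteditems")
        if hits_security and destructive:
            flagged.append(rule.get("displayName", "<unnamed>"))
    return flagged
```

Reviewing newly created rules and newly registered MFA devices together gives a much stronger signal than either alone, since the attackers described here pair the two.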

Following initial access, the attackers shift their focus toward accounts with higher privileges. They collect internal information, such as employee directories, to identify individuals with elevated access and then use further social engineering techniques to compromise those accounts as well. With increased privileges, they move across SaaS platforms including Google Workspace, HubSpot, Microsoft SharePoint, and Salesforce, searching for sensitive documents and business-critical data. Any valuable information is then exfiltrated to infrastructure controlled by the attackers.

Researchers note that in many observed cases, the stolen credentials provide access to the organization’s identity provider, which acts as a central authentication system. This creates a single entry point into multiple SaaS applications. By exploiting the trust relationships between the identity provider and connected services, attackers are able to move across the organization’s cloud ecosystem without needing to compromise each application separately. This allows them to access multiple systems using a single authenticated session.


CISA Highlights CVE-2026-31431 as an Active Linux Root Exploitation Risk

A recently disclosed Linux kernel vulnerability has attracted heightened scrutiny from the cybersecurity community, following evidence that it can be reliably exploited to obtain full root-level control across a wide range of systems. The flaw, formally referred to as "Copy Fail," affects kernel versions spanning nearly a decade, dramatically expanding its attack surface and posing a significant threat to millions of deployments.

It is tracked as CVE-2026-31431. Security researchers emphasize that the issue is significant not only for privilege escalation but also for its operational simplicity, cross-environment portability, and high exploitation success rate, factors that together elevate its threat profile and explain its classification as an actively exploited vulnerability. 

Upon reviewing these findings, the Cybersecurity and Infrastructure Security Agency (CISA) has formally escalated the issue by adding the flaw to its Known Exploited Vulnerabilities (KEV) catalogue, which indicates confirmed instances of exploitation across multiple Linux distributions in the wild. 

The weakness carries a CVSS score of 7.8 and is considered a local privilege escalation (LPE) vulnerability, permitting an unprivileged user with local access to elevate to root. Its long-undetected status, combined with a reliable exploitation pathway, makes its operational risk even greater than the moderate score suggests. 

Security researchers at Theori and Xint first identified and analyzed the issue under the designation "Copy Fail." It arises from the incorrect transfer of resources between security contexts within the Linux kernel, which can be exploited to bypass standard privilege boundaries. 

Several kernel patches, including versions 6.18.22, 6.19.12, and 7.0, have been released in response to this vulnerability, which has been actively exploited. Federal guidance urges organisations to prioritize updating based on the active exploitation status of the vulnerability. Additionally, its unusually low barrier to exploitation and wide ecosystem impact reinforce the urgency surrounding the flaw. 
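Administrators can get a first-pass read on exposure by comparing the running kernel release (e.g. the output of `uname -r`) against the patched versions listed above. The sketch below is a deliberately simplified model that ignores vendor backports, so a "vulnerable" verdict still needs to be confirmed against the distribution's own advisories.

```python
# Patched releases named in the advisory: 6.18.22, 6.19.12, and 7.0.
# Any kernel at or above the patched release in its series is treated
# as fixed (simplification: distro backports are not modeled).
PATCHED = {(6, 18): (6, 18, 22), (6, 19): (6, 19, 12), (7, 0): (7, 0, 0)}

def parse_release(release: str):
    """Turn a release string like '6.18.20-generic' into (major, minor, patch)."""
    base = release.split("-")[0]
    parts = (base.split(".") + ["0", "0"])[:3]
    return tuple(int(p) for p in parts)

def is_patched(release: str) -> bool:
    ver = parse_release(release)
    series = ver[:2]
    if series in PATCHED:
        return ver >= PATCHED[series]
    # Series outside the table: assume series after 7.0 carry the fix,
    # older series (e.g. older LTS lines) remain exposed absent a backport.
    return series > (7, 0)

if __name__ == "__main__":
    import platform
    rel = platform.release()
    print(f"{rel}: {'patched' if is_patched(rel) else 'check advisories'}")
```

On a fleet, running this against `uname -r` output collected from each host quickly narrows the patch queue to the machines that actually need attention.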

According to researchers, an exploit can be executed with as little as 732 bytes of code, which significantly reduces the threshold for abuse and extends its reach across virtually all major Linux distributions since 2017. 

At the core of the vulnerability, unprivileged local users are able to manipulate the kernel's in-memory page cache for readable files, including setuid binaries. Executables may thereby be modified at runtime without altering files on disk; injecting malicious code into trusted binaries such as /usr/bin/su results in root-level execution. This technique creates a stealthy pathway to privilege escalation. 

The security analysts at Wiz have stated that this in-memory tampering fundamentally undermines traditional integrity assumptions, since the page cache serves as the live execution layer for binaries. Furthermore, this risk is compounded when deploying large-scale Linux-based applications in modern cloud or containerised infrastructures. 

According to Kaspersky's analysis, environments that leverage container technologies, such as Docker, LXC, and Kubernetes, may be particularly vulnerable to threats. By default, container processes may interact with the AF_ALG subsystem if the algif_aead module is present in the host kernel, thus expanding the attack surface and enhancing privilege escalation across boundaries. 
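Whether the algif_aead module is currently loaded on a given host can be checked directly from /proc. A small sketch follows; note the caveat that a kernel with the code built in (rather than loaded as a module) will not list it in /proc/modules, so a negative result is not conclusive.

```python
from pathlib import Path

def algif_aead_exposed(proc_modules: str = "/proc/modules") -> bool:
    """Return True if the algif_aead module is currently loaded.

    A loaded algif_aead means the AF_ALG 'aead' interface is reachable
    from user space (including, by default, from container processes),
    which is one precondition of the exploit chain described above.
    Built-in (non-modular) algif_aead will not appear here."""
    try:
        text = Path(proc_modules).read_text()
    except OSError:
        return False  # /proc unavailable, e.g. not running on Linux
    return any(line.split()[0] == "algif_aead"
               for line in text.splitlines() if line.strip())

if __name__ == "__main__":
    print("algif_aead loaded:", algif_aead_exposed())
```

Where patching must wait, pairing this check with a module blacklist is a reasonable interim containment step, subject to confirming nothing in the environment legitimately depends on AF_ALG.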

In a technical sense, the vulnerability originates from a logic flaw within the Linux kernel's cryptographic pipeline, specifically the authenticated encryption template ("authenc"), where incomplete handling allows memory interactions that were not intended. 

Essentially, the vulnerability allows a local, unprivileged user to trigger a controlled four-byte write primitive into any readable file's page cache—a capability which appears to be constrained, but which has severe security implications when applied to executable memory. 

A key component of the exploit chain is the AF_ALG interface, which exposes kernel cryptographic operations to user space, together with the splice() system call, which is used to redirect data flows away from conventional buffers and into the page cache. 

By manipulating the in-memory representation of executables, attackers can subtly modify execution behaviour without changing files on disk; when these modifications target setuid-root executables, privilege escalation becomes trivial. Root-cause analysis has revealed that the vulnerability stems from a 2017 optimization introduced in Linux kernel version 4.14, which enabled in-place buffer reuse to improve performance but accidentally weakened memory isolation guarantees, creating the conditions for an exploit. 

Researchers have empirically validated the exploit on several distributions, including Ubuntu 24.04 LTS, Amazon Linux 2023, Red Hat Enterprise Linux 10.1, SUSE Linux Enterprise 16, and Debian, with a compact Python proof-of-concept demonstrating near-perfect reliability on each. Since the flaw affects virtually all distributions released since 2017, it has drawn comparisons with previous high-profile flaws, including Dirty Pipe (CVE-2022-0847). 

However, Copy Fail is more portable across kernel versions, more reliable, and is simpler to exploit, as it does not require specific offsets or narrowly scoped configurations to operate. To resolve the issue, kernel maintainers reverted the underlying optimization and reintroduced safer buffer handling mechanisms as part of versions 6.18.22, 6.19.12, and 7.0 of the kernel. 

Although major distributions have begun to deploy patched kernels, inconsistencies in advisory publication have caused friction in coordinated response efforts. Security researcher Will Dormann noted that some platforms have issued updates that do not consistently mention CVE-2026-31431, potentially stalling remediation and risk awareness at the enterprise level. 

Additional technical analysis of the flaw has revealed a practical exploitation pathway, illustrating how attackers can systematically operationalise the vulnerability in real-world environments. An attacker typically begins by identifying a Linux host or container running a vulnerable kernel version, then prepares a Python-based attack trigger tailored to the target machine. 

Once initiated, the exploit can be executed from a low-privilege context, either as a standard user on the host system or within a compromised container. By leveraging the underlying flaw, it performs a precise four-byte overwrite of the kernel page cache, corrupting sensitive kernel-managed data structures and enabling privilege escalation. Ultimately, this allows the attacker to elevate their process to UID 0 and obtain unrestricted root access.

As a result of the active threat landscape, Federal Civilian Executive Branch (FCEB) agencies have been instructed to resolve the vulnerability by May 15, 2026, in accordance with patches released by Linux distributions affected by this vulnerability. 

Where immediate patching is not feasible, interim mitigation strategies, including disabling the vulnerable kernel module, segmenting networks, and tightening access controls, are recommended to reduce exposure and contain potential compromise paths. 

The active exploitation of CVE-2026-31431, its extensive reach across the Linux ecosystem, and its relative ease of weaponisation make it a critical reminder of the risks inherent in longstanding kernel-level design decisions. The convergence of high reliability, minimal exploit complexity, and broad distribution exposure puts organizations under increasing pressure to verify their patch postures and expedite remediation. 

As a precautionary measure, security teams should prioritize kernel updates, closely monitor privilege escalation activity, and reassess controls around multi-tenant and containerised environments in which attack surfaces may be heightened. 

Threat actors will continue to seek out low-friction exploitation paths, making timely mitigation and disciplined system hardening essential to preserving operational integrity and limiting the impact of such kernel vulnerabilities.

AI-Powered License Plate Surveillance Sparks Urgent Push for Stronger Privacy Laws

 


The growing use of license plate tracking systems by companies like Flock Safety and Motorola’s VehicleManager has transformed routine drives into continuously recorded digital trails. Originally designed to capture license plate data, these systems have rapidly advanced into highly sophisticated surveillance tools. With the integration of artificial intelligence, cameras can now identify not only vehicles but also faces and other distinguishing features, silently building detailed records of individuals’ movements.

This technological shift raises an important question about the effectiveness of existing privacy protections. Laws governing surveillance vary widely across states, making it difficult to determine which frameworks are truly effective and where gaps remain.

To better understand the landscape, insights were gathered from Chad Marlow, senior policy counsel and lead for surveillance at the American Civil Liberties Union. He emphasized that meaningful privacy protection requires collective effort rather than individual action. "Collective action, rather than individual action, is required," Marlow said. He also warned, "I would caution that while Flock is the most problematic ALPR company in America, there are many other ALPR companies, like Axon and Motorola, that present serious privacy risks, so switching from Flock to Axon/Motorola ALPRs at best may constitute minimal harm reduction, but it is far from a solution."

Current legislation largely focuses on two major tools used by law enforcement: automatic license plate readers (ALPRs), which track vehicles, and drones equipped with AI-enabled cameras. Meanwhile, companies are expanding into traditional surveillance cameras capable of live monitoring and tracking individuals on the ground.

Advanced AI capabilities, such as Flock’s “Freeform” search feature, allow authorities to input open-ended queries and retrieve results from vast camera networks. These developments highlight the need for updated and comprehensive regulations. Several categories of laws are emerging as particularly impactful:

Restrictions on AI Surveillance Capabilities

Some of the most comprehensive laws limit what AI-powered cameras are allowed to detect and analyze. While not always targeting ALPRs directly, they regulate how data can be searched and used. Illinois stands out with its Biometric Information Privacy Act (BIPA), which protects sensitive identifiers like facial data and fingerprints and requires user consent. This law is so strict that certain features, such as facial recognition in consumer devices, are disabled within the state. However, many of these laws still exclude vehicle and license plate data, which often remains unprotected.

Limiting ALPR Use to Specific Investigations

Several states allow ALPR usage only under defined circumstances, such as serious criminal investigations. These restrictions prevent widespread deployment by private entities like homeowners associations or businesses and may also limit camera placement in certain public areas.

Mandatory Data Deletion Policies

One of the most effective privacy safeguards requires that collected data be deleted within a set timeframe unless tied to an active investigation. This prevents long-term tracking and profiling of individuals. As Marlow explained, "The idea of keeping a location dossier on every single person just in case one of us turns out to be a criminal is just about the most un-American approach to privacy I can imagine."

States like New Hampshire enforce extremely short data retention limits, requiring deletion within minutes if the data is not used. Others allow slightly longer windows. "For states that want a little more time to see if captured ALPR data is relevant to an ongoing investigation, keeping the data for a few days is sufficient," Marlow told me. "Some states, like Washington and Virginia, recently adopted 21-day limits, which is the very outermost acceptable limit." He further cautioned that prolonged storage makes it easier to build behavioral profiles "that can eviscerate individual privacy."
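The retention rules described above boil down to a simple filter: a record survives only if it is still inside the window or tied to an active case. A minimal sketch, with hypothetical field names and the 21-day outer limit Marlow cites:

```python
# Illustrative retention filter modeled on the 21-day limits described above.
# Record field names ("captured_at", "active_investigation") are assumptions
# for the sketch, not taken from any vendor's actual schema.
from datetime import datetime, timedelta

RETENTION = timedelta(days=21)

def apply_retention(records: list, now: datetime) -> list:
    """Keep only records still inside the window or tied to a case."""
    return [
        r for r in records
        if r.get("active_investigation") or now - r["captured_at"] <= RETENTION
    ]

now = datetime(2026, 1, 30)
records = [
    {"plate": "ABC123", "captured_at": datetime(2026, 1, 1),  "active_investigation": False},
    {"plate": "XYZ789", "captured_at": datetime(2026, 1, 28), "active_investigation": False},
    {"plate": "OLD001", "captured_at": datetime(2025, 12, 1), "active_investigation": True},
]
kept = apply_retention(records, now)
print([r["plate"] for r in kept])  # ['XYZ789', 'OLD001']
```

The 29-day-old record is purged; the flagged record survives regardless of age, mirroring the "unless tied to an active investigation" carve-out.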

Restrictions on Data Sharing Across Jurisdictions

Certain states prohibit sharing surveillance data beyond state borders, including with federal agencies. These measures aim to limit access by organizations such as the Department of Homeland Security or ICE, though enforcing such restrictions remains a challenge. As Marlow noted, "Ideally, no data should be shared outside the collecting agency without a warrant," Marlow said, "But some states have chosen to prohibit data sharing outside of the state, which is better than nothing, and does limit some risks."

Approval and Oversight Requirements

Another approach involves requiring state-level approval before installing ALPR systems. The rigor of these processes varies significantly. For example, Vermont implemented strict approval mechanisms that ultimately discouraged adoption altogether, with no agencies using ALPR systems by 2025.

Despite these efforts, new privacy laws often face resistance from companies and law enforcement agencies, sometimes leading to legal disputes and slow enforcement. Additionally, legislative proposals frequently evolve during the approval process, making it important for citizens to stay informed and engaged.

Advocacy groups and public participation also play a critical role. Initiatives like The Plate Project encourage individuals to take part in privacy discussions and reforms. Local involvement, such as attending city council meetings, can influence decisions on surveillance technology before implementation.

Ultimately, as surveillance capabilities continue to expand, the effectiveness of privacy protections will depend on both robust legislation and active public oversight.

Kyber Ransomware Tests Post‑Quantum Encryption on Windows Networks

 

A new ransomware group named Kyber has pushed the envelope by experimenting with post‑quantum encryption in attacks on Windows‑based networks, according to recent cybersecurity analysis. The group has been observed targeting both Windows file servers and VMware ESXi platforms, showing a cross‑platform capability designed to disrupt critical enterprise infrastructure. In one confirmed incident, a major U.S. defense contractor fell victim to the strain, underscoring the threat’s seriousness. 

The Kyber variant deployed on Windows is written in Rust and uses a hybrid encryption scheme that combines classical and post-quantum algorithms. Researchers at Rapid7 found that the Windows payload wraps AES-256 file-encryption keys using Kyber1024 (ML-KEM-1024), a lattice-based key-encapsulation mechanism standardized by NIST for quantum-resistant cryptography. The strain also incorporates X25519 elliptic-curve cryptography as an additional layer, creating a "belt-and-suspenders" approach to protect ransomware keys. 
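The "belt-and-suspenders" layering can be illustrated conceptually: the wrapping key is derived from both a classical shared secret and a post-quantum KEM shared secret, so breaking one layer alone recovers nothing. In this sketch the X25519 and ML-KEM operations are simulated with random bytes, and a toy XOR wrap stands in for AES key wrapping; this is not the ransomware's actual implementation.

```python
# Conceptual hybrid key-wrapping sketch. The "shared secrets" are simulated
# with os.urandom; real code would run X25519 and ML-KEM-1024 encapsulation.
import hashlib, hmac, os

def combine_secrets(classical: bytes, post_quantum: bytes) -> bytes:
    """KDF-style combiner: the wrapping key depends on BOTH shared secrets."""
    return hmac.new(classical, post_quantum, hashlib.sha256).digest()

def xor_wrap(key: bytes, wrapping_key: bytes) -> bytes:
    """Toy wrap (XOR keystream) standing in for AES key wrapping."""
    stream = hashlib.sha256(wrapping_key).digest()
    return bytes(a ^ b for a, b in zip(key, stream))

x25519_secret = os.urandom(32)   # stand-in for an X25519 shared secret
mlkem_secret = os.urandom(32)    # stand-in for an ML-KEM-1024 shared secret
file_key = os.urandom(32)        # the AES-256 key protecting victim files

wrap_key = combine_secrets(x25519_secret, mlkem_secret)
wrapped = xor_wrap(file_key, wrap_key)

# Only a party holding BOTH secrets can rebuild wrap_key and recover file_key.
assert xor_wrap(wrapped, combine_secrets(x25519_secret, mlkem_secret)) == file_key
```

The design point is the combiner: an attacker (or future quantum adversary) who breaks only the classical layer still lacks the post-quantum secret, and vice versa.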

Despite the marketing‑speak around “quantum‑proof” encryption, security experts note that Kyber’s use of post‑quantum crypto is largely symbolic at this stage. AES‑256 itself is already considered resistant to foreseeable quantum attacks, so relying on Kyber1024 mainly adds overhead without materially changing the practical impact for victims. Moreover, the Linux‑based ESXi encryptor does not actually use Kyber1024; it instead falls back to ChaCha8 and RSA‑4096, highlighting discrepancies between the ransomware’s claims and its implementation. 

Operationally, Kyber behaves like a modern ransomware strain: it seeks local administrator privileges, deletes Volume Shadow Copies via PowerShell and vssadmin, stops critical services, and encrypts files across shared drives. Windows files are typically appended with the .#~~~ extension, while the ESXi version uses .xhsyw, and each variant leaves a ransom note pointing to a Tor‑based leak site. The gang also runs a “Wall of Wonders” leak site to shame victims and pressure them into paying, a tactic increasingly common among ransomware‑as‑a‑service groups. 

For defenders, the lesson is that post‑quantum encryption in ransomware is more about optics than a game‑changer—for now. Organizations should still prioritize basics: strict privilege control, regular air‑gapped backups, monitoring unusual PowerShell and vssadmin activity, and rapid patching of ESXi and Windows servers. As quantum‑resistant standards mature, the broader cybersecurity community gains experience, even if attackers are the first to weaponize them in limited test‑bed campaigns like Kyber.

Iran Claims US Used Backdoors To Disable Networking Equipment During Conflict Amid Unverified Cyber Sabotage Reports

 

Midway through the incident, Iranian officials pointed fingers at American cyber operations. Devices made by firms like Cisco and Juniper began failing without warning. Power cycles hit Fortinet and MikroTik hardware even as Tehran limited external connections. Outages appeared tied to U.S. digital interference, according to local reports. Backdoors or coordinated botnet attacks were named as possible causes. Global discussion flared up almost immediately. Tensions between nations climbed higher amid unverified assertions. 

Network disruptions coincided too closely with military actions, some analysts noted. These reports indicate Iranian officials see the outages as intentional interference, not equipment malfunction. One theory supporting this view involves harmful software hidden inside firmware or startup systems, set to activate remotely when signaled, possibly through satellite links. A different explanation considers dormant networks of infected machines, ready to shut down devices all at once if activated. Still, no proof supports these statements. 

Confirming them is nearly impossible because Iran has restricted online access for long periods, blocking outside observers from seeing what happens inside its digital networks. Weeks of broad internet blackouts continue across the region, making verification even harder under such isolation. The accusations gain strength, most visibly in official outlets, through repeated links to earlier reports. 

Evidence that once surfaced via Edward Snowden is reused to support current assertions about U.S. practices. Hardware tampering stories resurface whenever discussions turn to digital trust. From that point onward, examples of intercepted equipment serve as grounding points. Even so, the connections drawn today rely heavily on incidents described years ago. 

Thus, suspicion persists within broader debates over tech control. Though the claims are serious, public confirmation of deliberate backdoors or a remote "kill switch" remains absent. Still, specialists point out past flaws found in gear from various makers. Yet linking widespread breakdowns to one unified assault demands strong validation. What matters is proof, not just patterns, when connecting such events. Nowhere is the worry over digital dependence clearer than in how fragile supply chains have become. 

A single compromised component might ripple across systems, simply because oversight lags behind complexity. Often, failures stem not from sabotage but from overlooked bugs or poor setup. Some breaches resemble accidents more than attacks, unfolding when neglected flaws are finally triggered. Deliberate tampering is rare; far more common are gaps left open by routine mistakes. Hardware made abroad adds another layer of uncertainty, though the real issue may lie in how it is used, not where it is built. Even now, global power struggles shape how cyber actions are perceived. 

As nations admit using online assaults during warfare, such events fit within larger strategic patterns. Still, absent solid proof, today’s accusations serve more as tools in storytelling contests among states. Truth be told, understanding cyber warfare grows tougher each year, as unclear technology limits, narrow access to data, and national agendas overlap. Though shutting down systems secretly from afar might work on paper, without outside verification, such claims sit closer to suspicion than proof.

Vietnam-Linked “AccountDumpling” Campaign Exploits Google AppSheet to Hijack Thousands of Facebook Accounts

 


A newly uncovered cybercrime campaign linked to Vietnamese actors has been leveraging Google AppSheet as a phishing relay to send deceptive emails aimed at compromising Facebook accounts.

The operation, dubbed “AccountDumpling” by Guardio, revolves around stealing Facebook accounts and reselling them through illicit online marketplaces controlled by the attackers. Researchers estimate that nearly 30,000 accounts have been breached in this coordinated campaign.

"What we found wasn't a single phishing kit," security researcher Shaked Chen wrote in a report shared with The Hacker News. "It was a living operation with real-time operator panels, advanced evasion, continuous evolution and a criminal-commercial loop that quietly feeds on the same accounts it helps steal back."

This discovery highlights a broader trend of Vietnamese threat groups using increasingly sophisticated tactics to gain unauthorized access to Facebook accounts, which are later sold in underground markets for profit.

The attack chain typically begins with phishing emails sent to Facebook Business users, falsely posing as messages from Meta Support. These emails warn recipients that their accounts risk permanent suspension unless they submit an appeal. Notably, the emails originate from a legitimate-looking Google AppSheet address ("noreply@appsheet.com"), helping them evade spam detection systems.

Victims are then directed to fraudulent websites designed to capture login credentials. Similar tactics were previously reported by KnowBe4 in May 2025.
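The sender/brand mismatch described above is itself a detectable signal: a message claiming to be Meta Support while actually originating from appsheet.com. A minimal heuristic sketch using the standard library (the keyword list and logic are illustrative, not from Guardio's report):

```python
# Flag emails that combine an AppSheet origin with Meta-themed lure wording.
from email import message_from_string
from email.utils import parseaddr

META_KEYWORDS = ("meta support", "facebook", "account suspension", "appeal")

def looks_like_appsheet_lure(raw_email: str) -> bool:
    msg = message_from_string(raw_email)
    _, sender = parseaddr(msg.get("From", ""))
    subject = msg.get("Subject", "").lower()
    sender_is_appsheet = sender.lower().endswith("@appsheet.com")
    meta_themed = any(k in subject for k in META_KEYWORDS)
    return sender_is_appsheet and meta_themed

sample = (
    "From: Meta Support <noreply@appsheet.com>\n"
    "Subject: Facebook account suspension - submit an appeal\n\n"
    "Your account will be permanently suspended."
)
print(looks_like_appsheet_lure(sample))  # True
```

Note that legitimate AppSheet notifications exist, which is exactly why the campaign evades naive spam filters; the mismatch with the claimed brand, not the domain alone, is the signal.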

In recent weeks, attackers have diversified their lures to trigger “Meta-related panic.” These include fake alerts about account bans, copyright violations, verification requests, job offers, and suspicious login activity. Guardio identified four primary attack patterns:
  • Phishing pages hosted on Netlify that mimic Facebook Help Center interfaces, collecting sensitive details such as birth dates, phone numbers, and ID documents, which are then transmitted to attacker-controlled Telegram channels.
  • Fake “blue badge” verification scams directing users through Vercel-hosted pages disguised as security checks, eventually harvesting credentials, business data, and two-factor authentication (2FA) codes.
  • Malicious PDF files hosted on Google Drive, posing as verification instructions, tricking users into submitting passwords, 2FA codes, ID images, and browser screenshots. These PDFs were created using a free Canva account.
  • Fraudulent job offers impersonating well-known brands such as WhatsApp, Adobe, Pinterest, Apple, and Coca-Cola to build trust and lure victims into further interaction on malicious platforms.
Across the first three attack clusters alone, associated Telegram channels were found to store around 30,000 victim records. Affected users span multiple countries, including the U.S., Italy, Canada, the Philippines, India, Spain, Australia, the U.K., Brazil, and Mexico, with many losing access to their accounts entirely.

Investigators traced part of the operation back to a Vietnamese individual after analyzing metadata embedded in the phishing PDFs, which listed the name “PHẠM TÀI TÂN” as the author. Further open-source investigation uncovered a website linked to this identity offering digital marketing services.

In a February 2023 post on X, the site’s account stated it "specializes in providing digital marketing services, marketing resources, and consulting on effective digital marketing strategies."

"Taken together, they form a consistent picture of a large, Vietnamese-based, mega operation," Chen said. "This campaign is bigger than a single AppSheet abuse. It's a window into the dark market around stolen Facebook assets, where access, business identity, ad reputation, and even account recovery have all become tradable commodities. Another entry in the pattern we keep surfacing: trusted platforms repurposed as delivery, hosting, and monetization layers."

Ransomware Campaign Leverages QEMU to Slip Past Enterprise Defences


 

In an effort to circumvent traditional security controls, hackers are increasingly relying on virtualisation as a covert execution layer, embedding malicious operations within QEMU environments. In observed incidents, adversaries deployed concealed virtual machines in which tooling and command execution occurred largely beyond the reach of endpoint detection systems, leaving minimal forensic artifacts on the host operating system. 

In most cases, these environments are introduced as virtual disk images disguised under atypical file extensions such as .db or .dll and triggered by scheduled tasks with SYSTEM level privileges to create a parallel runtime that blends with legitimate processes.

According to analysts at Sophos, such techniques take advantage of the trust associated with widely used virtualization software, a pattern that extends to platforms such as Microsoft Hyper-V, Oracle VM VirtualBox, and VMware. These tactics reflect a broader strategic shift in which legitimate infrastructure is used to create isolated, low-noise environments that permit ransomware deployment while remaining effectively invisible to host-based defenses. The researchers note that QEMU misuse is not a recent development, but its resurgence in recent operations signals a renewed tactical emphasis on the technique. 

In late 2025, analysts identified two separate ransomware campaigns, STAC4713 and STAC3725, that use virtualised environments to avoid detection; STAC4713 is specifically associated with the financially motivated PayoutsKing cluster of ransomware activity. 

In this campaign, the attacker established persistence by creating a scheduled task, "TPMProfiler," which executed a concealed virtual machine with SYSTEM-level privileges. The malicious disk image was deliberately disguised as a benign asset, initially appearing as a database file and later taking on the appearance of a dynamic link library, in order to blend seamlessly into routine system artifacts. 

Once active, the virtual instance initiated reverse SSH tunneling mechanisms and port-forwarding mechanisms, forming covert communication channels that enabled sustained remote access while remaining outside the scope of conventional monitoring tools. 

Within this isolated Alpine Linux environment, adversaries employed a compact toolkit for tunneling, obfuscation, and data exfiltration, facilitating credential harvesting, the extraction of Active Directory databases, and the lateral exploration of network shares, all while leaning on legitimate system utilities. 

By integrating trusted binaries and hidden virtual infrastructure, this intentional convergence highlights a refined intrusion model where malicious activity is woven into normal system behavior, increasing the difficulty of detecting and responding to intrusions. 

Further investigation has traced STAC4713's origins to November 2025, when it was linked to the GOLD ENCOUNTER threat group and directly associated with PayoutsKing ransomware, an operation that departs from the conventional ransomware-as-a-service model by executing intrusions without the assistance of affiliates. 

Since emerging in mid-2025, the group has demonstrated a focus on hypervisor-centric environments, developing customized encryption tools for platforms such as VMware ESXi and signaling a deliberate shift towards infrastructure-level disruption. 

Additionally, a second campaign, STAC3725, appeared in February 2026. This campaign accessed the system via the CVE-2025-5777 exploit chain before deploying a malicious instance of ConnectWise ScreenConnect to secure persistence. Using this foothold, attackers orchestrated credential harvesting against Active Directory environments using a concealed QEMU virtual machine. 

The intrusion sequence in STAC4713 is well planned, beginning with the creation of the "TPMProfiler" scheduled task, which executes qemu-system-x86_64.exe with SYSTEM privileges. This boots a virtual hard drive image disguised as a benign file, initially "vault.db" and later renamed "bisrv.dll", to evade scrutiny.

In addition to this obfuscation, network manipulation techniques are employed, including port forwarding from non-standard ports such as 32567 and 22022 to SSH port 22, while reverse tunnels involving AdaptixC2 or OpenSSH are used to maintain persistent and covert connectivity to attacker-controlled networks. Embedded virtual machines operate on Alpine Linux 3.22.0 images preconfigured to offer a compact but robust toolkit that enables the rapid transfer of data and execution of commands. 

The toolkit includes Linker2, AdaptixC2, the WireGuard obfuscation layer (wg-obfuscator), BusyBox, Chisel, and Rclone. In contrast, STAC3725 takes a more adaptive approach, compiling its toolset in situ within the virtual environment, including frameworks such as Impacket, KrbRelayX, Coercer, BloodHound.py, NetExec, Kerbrute, and Metasploit, along with Python, Rust, Ruby, and C dependencies. 

Post-compromise activities include credential extraction, Kerberos user enumeration via Kerbrute, Active Directory reconnaissance via BloodHound, and payload staging over FTP channels, demonstrating a methodical and deeply embedded attack model in which virtualization serves not only as a concealment mechanism, but also as a platform for sustained intrusion. 

In sum, the activity of STAC4713 and STAC3725 indicates a calculated evolution in adversary tradecraft in which virtualisation is no longer just a peripheral evasion tactic but a critical component of operations. Malicious workflows can be embedded within QEMU instances and aligned with trusted system processes, decoupling attackers' activities from the host environment. 

As a result, conventional endpoint controls struggle to detect the attacker's activities, while the attackers maintain persistent, low-noise access. The use of disguised storage artifacts, SYSTEM-level task execution, and encrypted communication channels demonstrates a disciplined approach to stealth, while the integration of credential harvesting, Active Directory reconnaissance, and lateral movement capabilities highlights the end-to-end nature of the intrusion. 

Sophos has observed that the resurgence of such campaigns indicates a broader industry challenge, in which legitimate infrastructure and administrative tools are increasingly repurposed to undermine defensive assumptions. 

Virtualised attack frameworks, with their convergence of concealment, persistence, and operational depth, provide a formidable vector for modern ransomware operations, requiring detection strategies to extend beyond the host into the virtual layers where adversaries now operate.

North Korea-Linked Hackers Target Crypto Platforms, $500M Stolen

 



Cybersecurity researchers are raising alarms over a developing pattern of cryptocurrency thefts linked to North Korean actors, with recent incidents suggesting a move from isolated breaches to a sustained and structured campaign. In a span of just over two weeks, attacks targeting the Drift trading platform and the Kelp protocol resulted in losses exceeding $500 million, pointing to a level of coordination that goes beyond opportunistic hacking.

What initially appeared to be separate security failures is now being viewed as part of a broader operational strategy, likely driven by the financial pressures faced by a heavily sanctioned state. Shortly after attackers used social engineering techniques to compromise Drift, another incident emerged involving Kelp, a restaking protocol integrated with cross-chain infrastructure.

The Kelp breach marks a noticeable turn in attacker behavior. Rather than exploiting traditional software bugs or stealing credentials, the attackers targeted fundamental design assumptions within decentralized systems. When examined together, both incidents indicate a deliberate escalation in efforts to extract value from the crypto ecosystem.

Alexander Urbelis of ENS Labs described the pattern as systematic rather than incidental, noting that the frequency and timing of these events resemble an operational cycle. He warned that reactive fixes alone are insufficient against threats that follow a structured tempo.


Breakdown of the Kelp exploit

Unlike many traditional cyberattacks, the Kelp incident did not involve bypassing encryption or stealing private keys. Instead, the system behaved as designed, but was fed manipulated data. Attackers altered the inputs that the protocol relied on, causing it to validate transactions that never actually occurred.

Urbelis explained that while cryptographic signatures can verify the origin of a message, they do not ensure the truthfulness of the information being transmitted. In simple terms, the system confirmed who sent the data, but failed to verify whether the data itself was accurate.

David Schwed of SVRN reinforced this view, stating that the exploit was not based on breaking cryptography, but on taking advantage of how the system had been configured.

A central weakness was Kelp’s dependence on a single verifier to validate cross-chain messages. While this approach improves efficiency and simplifies deployment, it removes an essential layer of security redundancy. In response, LayerZero has advised projects to adopt multiple independent verifiers, similar to requiring multiple approvals in traditional financial systems.
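The multi-verifier guidance above amounts to a quorum rule: a cross-chain message is accepted only when enough independent verifiers attest to it, so one compromised verifier can no longer authorize transfers alone. A minimal sketch, with verifier behavior simulated as booleans rather than real signature checks:

```python
# k-of-n attestation check: a single poisoned verifier cannot pass messages.
def accept_message(attestations: dict, threshold: int) -> bool:
    """attestations maps verifier id -> whether it confirmed the message."""
    confirmed = sum(1 for ok in attestations.values() if ok)
    return confirmed >= threshold

# One poisoned verifier says yes, two honest ones say no: rejected at 2-of-3.
print(accept_message({"v1": True, "v2": False, "v3": False}, threshold=2))  # False
# Honest majority confirms a genuine transfer.
print(accept_message({"v1": True, "v2": True, "v3": False}, threshold=2))   # True
```

The trade-off the article describes is visible even in this toy: a threshold of 1 restores Kelp's single-verifier efficiency and its single point of failure.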

However, this recommendation has sparked criticism. Some experts argue that if a configuration is known to be unsafe, it should not be offered as a default option. Relying on users to manually implement secure settings, especially in complex environments, increases the likelihood of misconfiguration.


Contagion across interconnected systems

The impact of the Kelp exploit did not remain confined to a single platform. Decentralized finance systems are deeply interconnected, with assets frequently reused across multiple protocols. This creates a chain of dependencies, where a failure in one component can propagate across others.

Schwed described these assets as interconnected obligations, emphasizing that the strength of the system depends on each individual link. In this case, lending platforms such as Aave, which accepted the affected assets as collateral, experienced financial strain. This transformed an isolated breach into a broader ecosystem-level disruption.


Reassessing decentralization claims

The incident also exposes a disconnect between how decentralization is promoted and how systems actually function. A structure that relies on a single point of verification cannot be considered fully decentralized, despite being marketed as such.

Urbelis expanded on this by noting that decentralization is not an inherent feature, but the result of specific design decisions. Weaknesses often emerge in less visible layers, such as data validation or infrastructure components, which are increasingly becoming primary targets for attackers.

The activity aligns with a bigger change in strategy by groups such as Lazarus Group. Instead of focusing only on exchanges or obvious coding flaws, attackers are now targeting foundational infrastructure, including cross-chain bridges and restaking mechanisms.

These components play a critical role in enabling asset movement and reuse across blockchain networks. Their complexity, combined with the large volumes of value they handle, makes them particularly attractive targets.

Earlier waves of crypto-related attacks often focused on centralized platforms or easily identifiable vulnerabilities. In contrast, current operations are increasingly directed at the underlying systems that connect the ecosystem, which are harder to monitor and more prone to configuration errors.

Importantly, the Kelp exploit did not introduce a new category of vulnerability. Instead, it demonstrated how existing weaknesses remain exploitable when not properly addressed. The incident underscores a recurring issue in the industry: security measures are often treated as optional guidelines rather than mandatory requirements.

As attackers continue to enhance their methods and increase the pace of operations, this gap becomes easier to exploit and more costly for organizations. The growing sophistication of these campaigns suggests that the primary risk may not lie in unknown flaws, but in the failure to consistently address well-understood security challenges.

Terms And Conditions Grow Harder To Read As Platforms Limit Users’ Legal Rights Study Finds

 

Most people click "agree" without looking - yet those agreements keep getting harder to understand. Complexity rises, researchers note, just as user protections shrink. From Cambridge, a recent study points out expanded corporate access to personal information. Legal barriers grow tougher, making it more difficult to take firms to court. Lengthy clauses quietly reshape power, favoring businesses over individuals. Beginning with a project called the Transparency Hub, results emerge from systematic tracking of legal texts across 300-plus online platforms. 

Stored within it are twenty thousand iterations, past and present, of service conditions and privacy notices from apps such as TikTok. Over months, changes in wording reveal shifts in corporate approaches to personal information. What users agree to today may differ subtly from last year's version, now preserved here. Visibility grows as updates accumulate, exposing patterns once hidden beneath routine acceptance clicks. The clearest trend is a steady drop in how easily people can read service contracts. 

From 2016 to 2025, analyses applying the Flesch-Kincaid method found that nearly 86 percent of agreements demand reading skills typical of university-level readers. Because of this shift, grasping the full meaning behind digital consent has grown harder for most individuals. While signing up seems routine, understanding often lags far behind. Beyond mere complexity, attention is turning to changing corporate approaches to handling disagreements. Conflicts once settled in open courtrooms now lean on closed-door arbitration imposed by platform rules. 
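The readability figures come from the standard Flesch-Kincaid grade-level formula, 0.39 × (words/sentences) + 11.8 × (syllables/word) − 15.59. A rough sketch follows; the syllable counter is a crude vowel-group heuristic, so scores are only approximate compared with real readability tools:

```python
# Approximate Flesch-Kincaid grade level for a text.
import re

def count_syllables(word: str) -> int:
    """Crude heuristic: count runs of vowels as syllables (minimum 1)."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def fk_grade(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    n = max(1, len(words))
    return 0.39 * (n / sentences) + 11.8 * (syllables / n) - 15.59

simple = "The cat sat. The dog ran."
legalese = ("Notwithstanding any provision herein, arbitration of disputes "
            "shall be administered confidentially by the designated organization.")
print(fk_grade(simple) < fk_grade(legalese))  # True
```

Long sentences stuffed with polysyllabic legal vocabulary push the score toward (and past) university grade levels, which is exactly the pattern the study measured.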

A third-party referee reaches final judgments, yet clarity tends to fade behind closed processes. Users find their options shrinking when collective lawsuits are blocked. Even mediator choices sometimes rest with the businesses involved, quietly shaping outcomes. Newer artificial intelligence platforms like Anthropic and Perplexity AI also follow this pattern, embedding clauses that block participation in group litigation. Because of this, anyone feeling wronged has to file a personal claim - often pricier and weaker than joining others in court. A few companies allow narrow chances to decline the clause; however, acting fast after registration is usually required. 

The study arrives as officials across Europe weigh tighter rules for online services, with particular focus on effects tied to youth engagement. With France leading, followed by Spain, Portugal, and Denmark, governments are testing new measures aimed at addressing unease around digital privacy and web-based risks. One thing stands out: laws around online services are drifting further from what everyday users can grasp. 

As the written rules grow longer and stricter, people must sort through fine print that defines their digital freedoms, frequently unaware of what they are agreeing to. Clarity lags behind complexity, while personal responsibility quietly expands.

Lazarus Hackers Steal $290M from KelpDAO in Cross-Chain Exploit

 

KelpDAO has become the latest DeFi project to face a major security crisis after a $290 million heist that investigators say is likely tied to North Korea’s Lazarus Group. The attack targeted rsETH, a restaked ether asset used across several protocols, and drained about 116,500 tokens in a matter of hours. What makes the incident alarming is that the exploit did not appear to rely on a typical smart-contract flaw. Instead, it seems to have abused the project’s cross-chain verification setup, showing how a vulnerability in infrastructure can be just as damaging as a bug in code. 

According to the project’s public statement, KelpDAO detected suspicious cross-chain activity involving rsETH on April 18, 2026, and quickly paused rsETH contracts across Ethereum mainnet and Layer 2 networks. The team said it was working with LayerZero, Unichain, and other partners to investigate the breach and contain the damage. On-chain activity later showed that the stolen funds were moved through Tornado Cash, a common laundering route used to hide crypto theft. 

LayerZero’s early findings suggest the attack was highly coordinated. Researchers believe the hackers compromised RPC nodes and then used a DDoS campaign to force the system into failing over to poisoned infrastructure, where fraudulent cross-chain messages could be accepted as legitimate. In other words, the attackers appear to have tricked the bridge layer into believing a transfer had been properly authorized. That design weakness, rather than the asset itself, seems to have opened the door to the theft. 

The impact propagated quickly beyond KelpDAO. Because rsETH is accepted as collateral in lending markets, the exploit created risk for other DeFi platforms, including Compound, Euler, and Aave. Aave responded by freezing its rsETH markets and blocking new deposits and borrowing against rsETH collateral. The wider market reaction highlights how one compromised bridge can ripple across multiple protocols, creating uncertainty far beyond the original target.

The KelpDAO incident is another reminder that DeFi security depends not only on smart-contract audits, but also on the trust assumptions behind cross-chain systems. As protocols grow more interconnected, attackers need only find one weak link to trigger losses on a massive scale. For users and developers alike, the lesson is clear: layered security, diversified verification, and conservative bridge design are no longer optional.

PyTorch Lightning and Intercom Client Users Exposed to Credential Stealing Campaign


 

Python's software supply chain has suffered a sophisticated compromise targeting the popular PyPI package Lightning, exposing downstream machine learning environments to covert credential theft.

According to researchers at Aikido Security, OX Security, Socket, and StepSecurity, versions 2.6.2 and 2.6.3, both published on April 30, 2026, were maliciously modified as part of a broader intrusion linked to the "Mini Shai-Hulud" campaign.

The attack had surfaced a day earlier through compromised SAP-related npm packages, underlining an ongoing trend of coordinated cross-ecosystem supply chain threats against high-value development environments. The compromise puts organizations that use PyTorch Lightning, an open-source abstraction layer over PyTorch with more than 31,000 stars on GitHub, at significant risk.

Lightning's ubiquity widened the scope of the attack: the package is frequently embedded in dependency trees supporting image classification, fine-tuning of large language models, diffusion workloads, and forecasting.

A standard pip install lightning command was sufficient to activate the malicious chain; exploitation did not require a sophisticated trigger. Installing a compromised release created a hidden _runtime directory containing obfuscated JavaScript, which executed automatically on module import. The behavior was embedded in the package's initialization logic, so no further user interaction was needed to run the script.

Once triggered, a Python script (start.py) downloaded the Bun JavaScript runtime from external sources, followed by an 11 MB obfuscated file (router_runtime.js) that carried out the attack sequence in stages. This cross-language execution model, JavaScript running from within a Python package, marks a significant evolution in attacker tradecraft and complicates detection mechanisms that focus on single-language threats.
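One quick triage step is to scan installed packages for the hidden _runtime directory researchers described. The sketch below is illustrative, not an official detection tool; the directory name comes from the public reports, and everything else (function name, scan depth) is an assumption:

```python
import site
from pathlib import Path

def find_suspicious_runtime_dirs(roots=None):
    """Scan site-packages trees for hidden `_runtime` directories,
    the on-disk artifact reported in the compromised lightning releases."""
    roots = roots or site.getsitepackages()
    hits = []
    for root in roots:
        root = Path(root)
        if not root.is_dir():
            continue
        # Walk the whole tree: the directory was hidden inside
        # the package's own source layout, not at the top level.
        hits.extend(p for p in root.rglob("_runtime") if p.is_dir())
    return hits

for hit in find_suspicious_runtime_dirs():
    print("possible compromise artifact:", hit)
```

A match is only a lead, not proof of compromise: legitimate projects can ship a directory with the same name, so any hit should be inspected for the obfuscated JavaScript payload before drawing conclusions.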

The malware's primary objective was credential harvesting. Analysis indicates that it systematically targeted GitHub tokens; cloud service credentials spanning Amazon Web Services (AWS), Google Cloud Platform (GCP), and Azure; SSH keys; npm tokens; Kubernetes configurations; Docker credentials; and environment variables. It could also access cryptocurrency wallets and developer secrets stored in local and continuous integration/continuous delivery (CI/CD) environments.

Stolen data was exfiltrated using the compromised credentials, often through automated commits to attacker-controlled GitHub repositories, which concealed the malicious activity within legitimate developer workflows. Distinctive markers linked the campaign to the "Shai-Hulud" identity.

Infected environments were observed creating public repositories with unusual names, including EveryBoiWeBuildIsaWormBoi, and descriptions such as "A Mini Shai-Hulud has appeared." These artifacts appear to serve the attackers both as infection indicators and as signalling mechanisms for tracking compromised systems.

Researchers have linked the activity to a financially motivated threat group referred to as TeamPCP, which has consistently demonstrated a focus on credential-rich development environments. According to OX Security, approximately 8.3 million downloads may have been exposed as a result of the incident.

The intercom-client package was compromised on the same day, further demonstrating the coordinated nature of the campaign. These incidents cap a series of supply chain breaches affecting npm, PyPI, and Docker Hub between April 21 and 23, suggesting a deliberate, sustained effort to infiltrate widely trusted software distribution channels.

Further examination of the router_runtime.js payload uncovered extensive obfuscation and a clear focus on credential access and repository manipulation: roughly 700 references to process and environment variables, more than 460 references to authentication tokens, and about 330 references to code repositories.

These patterns closely mirror earlier Shai-Hulud operations, emphasizing code reuse and iterative refinement of attack techniques. The payload was also capable of poisoning GitHub repositories and propagating through npm packages, raising concerns about secondary infection vectors beyond data exfiltration.

The compromise came to light in the Lightning-AI GitHub repository when a user reported suspicious behavior under issue #21689, titled "Possible supply chain attack on version 2.6.3." The report described a hidden execution chain that downloaded the Bun runtime and executed a large obfuscated payload during module import. The issue was later closed without clarification, creating uncertainty about the project's initial response.

An even more unusual incident followed Socket's disclosure in the Lightning-AI/pytorch-lightning repository. Within seconds, an account identified as pl-ghost closed the issue warning about the compromised versions and posted a meme captioned "SILENCE DEVELOPER." The anomalous behavior raised immediate concerns about potential account compromise.

Additional suspicious activity was traced to the same account, including six rapid branch creations and deletions across multiple repositories within roughly 70 minutes. Several of these branches followed random ten-character lowercase naming conventions, consistent with the Shai-Hulud worm's behavior of probing for write access.

One branch impersonated Dependabot, while another contained inconsistencies such as a misspelled identifier and an incorrect naming structure. All of the branches were deleted within seconds of creation, and none triggered workflows, indicating automated probing rather than legitimate development. Taken together, the evidence strongly suggests the maintainer account was compromised, possibly with the same stolen credentials that enabled the malicious package publication on PyPI.

Upon learning of the incident, Python Package Index administrators quarantined the Lightning versions that may have been affected. According to the maintainers, an investigation is underway to determine the root cause, as the compromised releases introduced functionality consistent with credential harvesting.

In the meantime, developers are strongly advised to remove versions 2.6.2 and 2.6.3 from their environments, downgrade to version 2.6.1, and rotate any potentially exposed credentials across cloud and development platforms, including API keys, tokens, and access credentials. The campaign is also evolving beyond Python.
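The version check in that advice is easy to automate in CI. The sketch below flags the compromised lightning releases named above; the version sets reflect the advisory, while the helper name and message format are illustrative:

```python
from importlib import metadata

# Releases reported as compromised, and the recommended downgrade target.
COMPROMISED = {"lightning": {"2.6.2", "2.6.3"}}
SAFE_PIN = {"lightning": "2.6.1"}

def check_environment(get_version=metadata.version):
    """Return remediation hints for any installed compromised release."""
    findings = []
    for pkg, bad_versions in COMPROMISED.items():
        try:
            installed = get_version(pkg)
        except metadata.PackageNotFoundError:
            continue  # package not installed in this environment
        if installed in bad_versions:
            findings.append(
                f"{pkg}=={installed} is compromised; downgrade with "
                f"'pip install {pkg}=={SAFE_PIN[pkg]}' and rotate every "
                f"credential the build host could access"
            )
    return findings

for finding in check_environment():
    print(finding)
```

A clean version check is only the first step: because the payload ran on import, any host that ever imported a compromised release should still have its tokens and keys rotated.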

Researchers have confirmed that version 7.0.4 of the intercom-client package in the Node ecosystem has also been compromised, using a preinstall hook to execute credential-stealing malware. Packagist has been hit as well: the intercom/intercom-php package (version 5.0.2) was altered to include a Composer plugin that downloads the Bun runtime through a shell script (setup-intercom.sh) and executes the same obfuscated payload during installation and updates.

In these cases, the stolen data was encrypted and exfiltrated to a remote server endpoint, further demonstrating the campaign's adaptability across ecosystems. The GitHub account "nhur" has likely been compromised, and the malicious intercom-client package appears to have been published through an automated continuous integration workflow triggered by a now-deleted GitHub branch.

Clear technical overlap exists across the npm, PyPI, and PHP attacks, with similarities in GitHub-based exfiltration techniques, credential targeting patterns, and payload structures. Researchers have also found parallels with previous attacks affecting organizations such as Checkmarx, Bitwarden, Telnyx, LiteLLM, and Aqua Security's Trivy, supporting the hypothesis that a single threat actor is responsible.

After being suspended from mainstream platforms, TeamPCP reportedly launched an onion site on the dark web to maintain its presence. The actors have also publicly referenced ties to other cybercriminal groups, including LAPSUS$, while marketing their own tooling and infrastructure.

The developments suggest an increasingly organized and persistent threat landscape in which supply chain attacks are not isolated incidents but a broader strategy for infiltrating and monetizing developer ecosystems. As investigations continue, the Lightning and Intercom compromises stand as a stark reminder of the fragility of modern software supply chains.

With attackers increasingly capable of pivoting across ecosystems and exploiting trusted distribution channels, organizations operating in cloud-native and AI-driven environments must rely on robust dependency auditing, real-time monitoring, and rapid incident response.

The incident marks a critical juncture in software supply chain security, in which trusted ecosystems are increasingly weaponised through stealthy, cross-language attack chains. The coordinated compromises of PyPI, npm, and Packagist packages, together with evidence of maintainer account abuse and automated propagation techniques, demonstrate a level of operational maturity that challenges traditional methods of detection and response.

Proactive defenses are now essential against threats such as TeamPCP, which has demonstrated the capability to infiltrate developer workflows at scale. These include rigorous dependency auditing, tighter access controls, and continuous monitoring of build environments.

In the current threat landscape, safeguarding the integrity of open-source components is essential to maintaining confidence in modern software development.