
CISA Highlights CVE-2026-31431 as an Active Linux Root Exploitation Risk


 

A recently disclosed Linux kernel vulnerability has attracted heightened scrutiny from the cybersecurity community, following evidence that it can be exploited consistently to obtain full root-level control across a wide range of systems. The vulnerability, dubbed "Copy Fail," affects kernel versions spanning nearly a decade, dramatically expanding its attack surface and posing a significant threat to millions of deployments.

It is tracked as CVE-2026-31431. Security researchers emphasize that the issue is significant not only as a privilege escalation: its operational simplicity, cross-environment portability, and high exploitation success rate all contribute to its elevated threat profile and explain why it has been classified as an actively exploited vulnerability. 

Upon reviewing these findings, the Cybersecurity and Infrastructure Security Agency (CISA) formally escalated the issue by adding the flaw to its Known Exploited Vulnerabilities (KEV) catalogue, reflecting confirmed instances of in-the-wild exploitation across multiple Linux distributions. 

The weakness, tracked as CVE-2026-31431, carries a CVSS score of 7.8 and is classified as a local privilege escalation (LPE) vulnerability, permitting an unprivileged user with local access to elevate privileges to root. Its long-undetected status, combined with a reliable exploitation pathway, makes its operational risk greater than the moderate score suggests. 

Security researchers at Theori and Xint first identified and analyzed the issue under the designation "Copy Fail." It arises from the incorrect transfer of resources between security contexts within the Linux kernel, which can be exploited to bypass standard privilege boundaries. 

Patched kernels, versions 6.18.22, 6.19.12, and 7.0, have been released in response to the vulnerability, which has been actively exploited. Federal guidance urges organisations to prioritize updating based on its active exploitation status; its unusually low barrier to exploitation and wide ecosystem impact reinforce that urgency. 

According to researchers, an exploit can be executed with as little as 732 bytes of code, which significantly reduces the threshold for abuse and extends its reach across virtually all major Linux distributions since 2017. 

At the core of the vulnerability, unprivileged local users are able to manipulate the kernel's in-memory page cache of readable files, including setuid binaries. By doing so, executables may be modified at runtime without altering files on disk: injecting malicious code into a trusted binary such as /usr/bin/su results in execution with root-level permissions, creating a stealthy pathway to privilege escalation. 

The security analysts at Wiz have stated that this in-memory tampering fundamentally undermines traditional integrity assumptions, since the page cache serves as the live execution layer for binaries. The risk is further compounded in modern cloud and containerised infrastructures that deploy Linux-based applications at scale. 

According to Kaspersky's analysis, environments that leverage container technologies, such as Docker, LXC, and Kubernetes, may be particularly vulnerable to threats. By default, container processes may interact with the AF_ALG subsystem if the algif_aead module is present in the host kernel, thus expanding the attack surface and enhancing privilege escalation across boundaries. 
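Where the algif_aead module is not otherwise needed, one interim hardening step is to prevent it from loading at all. A sketch of a modprobe configuration follows; the file name is a convention, and whether the module can safely be disabled is an assumption each administrator must verify for their workloads:

```
# /etc/modprobe.d/disable-algif-aead.conf
# Prevent the algif_aead kernel module from loading or auto-loading.
blacklist algif_aead
install algif_aead /bin/false
```

The `install ... /bin/false` line blocks on-demand auto-loading as well as explicit `modprobe` calls, which matters here because unprivileged AF_ALG use can trigger auto-loading.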

In technical terms, the vulnerability originates from a logic flaw within the Linux kernel's cryptographic pipeline, specifically the authenticated encryption template ("authenc"), where incomplete handling allows unintended memory interactions. 

Essentially, the vulnerability allows a local, unprivileged user to trigger a controlled four-byte write into any readable file's page cache. The primitive appears constrained, but it has severe security implications when applied to executable memory. 
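To make the distinction between on-disk content and cached content concrete, the primitive can be pictured with a toy model. This is purely illustrative Python, not exploit code: the bytearray stands in for the kernel's cached copy of a file.

```python
# Toy model: the page cache is the kernel's in-memory copy of a file.
# A write primitive that reaches the cache changes what programs see
# when the file executes, while the on-disk bytes stay pristine.
disk_bytes = b"\x7fELF" + b"\x00" * 60    # stand-in for a binary on disk
page_cache = bytearray(disk_bytes)        # stand-in for the cached copy

def four_byte_write(cache: bytearray, offset: int, value: bytes) -> None:
    """Model of a constrained, attacker-controlled four-byte write."""
    if len(value) != 4:
        raise ValueError("primitive writes exactly four bytes")
    cache[offset:offset + 4] = value

four_byte_write(page_cache, 4, b"EVIL")

assert disk_bytes[4:8] == b"\x00" * 4      # on-disk copy untouched
assert bytes(page_cache[4:8]) == b"EVIL"   # cached copy has diverged
```

The asymmetry is the whole point of the technique: integrity tools that hash files on disk see nothing wrong, while the executing copy has been altered.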

A key component of the exploit chain is the AF_ALG interface, which exposes kernel cryptographic operations to user space, together with the splice() system call, which is used to redirect data flows away from conventional buffers and into the page cache. 

By manipulating the in-memory representation of executables, attackers can subtly modify their execution behaviour without changing files on disk; when these modifications target setuid-root executables, escalation to full root privileges becomes trivial. Root-cause analysis traces the vulnerability to a 2017 optimization introduced in Linux kernel version 4.14, which enabled in-place buffer reuse to improve performance but accidentally weakened memory isolation guarantees, creating the conditions for exploitation. 

Researchers have empirically validated the exploit on several distributions, including Ubuntu 24.04 LTS, Amazon Linux 2023, Red Hat Enterprise Linux 10.1, SUSE Linux Enterprise 16, and Debian, with a compact Python proof-of-concept demonstrating near-perfect reliability on each. Because the flaw affects virtually all distributions released since 2017, it has drawn comparisons with previous high-profile flaws, including Dirty Pipe (CVE-2022-0847). 

However, Copy Fail is more portable across kernel versions, more reliable, and is simpler to exploit, as it does not require specific offsets or narrowly scoped configurations to operate. To resolve the issue, kernel maintainers reverted the underlying optimization and reintroduced safer buffer handling mechanisms as part of versions 6.18.22, 6.19.12, and 7.0 of the kernel. 
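Given the fixed releases named above (6.18.22, 6.19.12, and 7.0), a first-pass triage check of a reported kernel version might look like the following sketch. The fixed-version list comes from the advisory; the simple dotted-version parsing is an assumption, and distribution backports of the fix are deliberately not modeled:

```python
def parse_kernel(version: str) -> tuple:
    """Parse a '6.18.21-generic' style string into a numeric tuple."""
    base = version.split("-")[0]
    return tuple(int(part) for part in base.split("."))

# Earliest fixed release on each affected series, per the advisory.
FIXED = {(6, 18): (6, 18, 22), (6, 19): (6, 19, 12)}

def is_patched(version: str) -> bool:
    """True if the reported kernel is at or past a fixed release.

    Caveat: vendor backports are not modeled, so older series are
    conservatively reported as unpatched.
    """
    v = parse_kernel(version)
    series = v[:2]
    if series in FIXED:
        return v >= FIXED[series]
    return v >= (7, 0)   # 7.0 and later ship the fix

assert not is_patched("6.18.21-generic")
assert is_patched("6.19.12")
```

In practice the input would come from `uname -r`; treating "unknown series" as unpatched keeps the check fail-safe.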

Although major distributions have begun to deploy patched kernels, inconsistencies in advisory publication have caused friction in coordinated response efforts. Security researcher Will Dormann has noted that some platforms issued updates that do not consistently mention CVE-2026-31431, potentially stalling remediation and risk awareness at the enterprise level. 

Further technical analysis of the flaw has revealed a practical exploitation pathway, illustrating how attackers can systematically operationalise the vulnerability in real-world environments. An attacker typically begins by identifying a Linux host or container running a vulnerable kernel version, then prepares a Python-based attack trigger tailored to the target machine. 

The exploit can be executed from a low-privilege context, either as a standard user on the host system or from within a compromised container. Leveraging the underlying flaw, it overwrites exactly four bytes in the kernel page cache, corrupting sensitive kernel-managed data structures and enabling privilege escalation. Ultimately, this allows the attacker to elevate their process to UID 0 and obtain unrestricted root access.

As a result of the active threat landscape, Federal Civilian Executive Branch (FCEB) agencies have been instructed to resolve the vulnerability by May 15, 2026, in accordance with patches released by Linux distributions affected by this vulnerability. 

Where immediate patching is not feasible, interim mitigation strategies, including disabling the vulnerable kernel module, segmenting networks, and tightening access controls, have been recommended as a means of reducing exposure and containing potential compromise paths. 

The active exploitation status of CVE-2026-31431, its extensive reach across the Linux ecosystem, and its relative ease of weaponisation make it a critical reminder of the risks inherent in longstanding kernel-level design decisions. The convergence of high reliability, minimal exploit complexity, and broad distribution exposure is putting organizations under increasing pressure to verify their patch posture and expedite remediation. 

As a precautionary measure, security teams should prioritize kernel updates, closely monitor privilege escalation activity, and reassess controls around multi-tenant and containerised environments in which attack surfaces may be heightened. 

Threat actors will continue to favour low-friction exploitation paths, which makes timely mitigation and disciplined system hardening essential to preserving operational integrity and limiting the impact of kernel vulnerabilities of this kind.

Kyber Ransomware Tests Post‑Quantum Encryption on Windows Networks

 

A new ransomware group named Kyber has pushed the envelope by experimenting with post‑quantum encryption in attacks on Windows‑based networks, according to recent cybersecurity analysis. The group has been observed targeting both Windows file servers and VMware ESXi platforms, showing a cross‑platform capability designed to disrupt critical enterprise infrastructure. In one confirmed incident, a major U.S. defense contractor fell victim to the strain, underscoring the threat’s seriousness. 

The Kyber variant deployed on Windows is written in Rust and uses a hybrid encryption scheme that combines classical and post‑quantum algorithms. Researchers at Rapid7 found that the Windows payload wraps AES‑256 file‑encryption keys using Kyber1024 (ML‑KEM1024), a lattice‑based key‑encapsulation mechanism standardized by NIST for quantum‑resistant cryptography. The strain also incorporates X25519 elliptic‑curve cryptography as an additional layer, creating a “belt‑and‑suspenders” approach to protect ransomware keys. 
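The hybrid construction can be illustrated with a stdlib-only sketch: the file-encryption key is derived from both shared secrets, so an attacker must break both the classical and the post-quantum layer to recover it. The HKDF below is a minimal RFC 5869 implementation, and the secrets are random stand-ins, since neither X25519 nor ML-KEM is in the Python standard library:

```python
import hashlib
import hmac
import os

def hkdf_sha256(ikm: bytes, info: bytes, length: int = 32) -> bytes:
    """Minimal HKDF (RFC 5869) with an all-zero salt."""
    prk = hmac.new(b"\x00" * 32, ikm, hashlib.sha256).digest()
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# Stand-ins for the two shared secrets; real code would obtain them
# from an X25519 exchange and an ML-KEM-1024 encapsulation.
ss_classical = os.urandom(32)
ss_post_quantum = os.urandom(32)

# Belt and suspenders: the AES-256 key depends on BOTH secrets.
aes_key = hkdf_sha256(ss_classical + ss_post_quantum, b"hybrid-file-key")
assert len(aes_key) == 32

# Knowing only one of the secrets yields an unrelated key.
partial = hkdf_sha256(ss_classical + b"\x00" * 32, b"hybrid-file-key")
assert partial != aes_key
```

The design choice mirrors common hybrid-KEM guidance: concatenating the secrets before the KDF means the derived key is safe as long as either input remains secret.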

Despite the marketing‑speak around “quantum‑proof” encryption, security experts note that Kyber’s use of post‑quantum crypto is largely symbolic at this stage. AES‑256 itself is already considered resistant to foreseeable quantum attacks, so relying on Kyber1024 mainly adds overhead without materially changing the practical impact for victims. Moreover, the Linux‑based ESXi encryptor does not actually use Kyber1024; it instead falls back to ChaCha8 and RSA‑4096, highlighting discrepancies between the ransomware’s claims and its implementation. 

Operationally, Kyber behaves like a modern ransomware strain: it seeks local administrator privileges, deletes Volume Shadow Copies via PowerShell and vssadmin, stops critical services, and encrypts files across shared drives. Windows files are typically appended with the .#~~~ extension, while the ESXi version uses .xhsyw, and each variant leaves a ransom note pointing to a Tor‑based leak site. The gang also runs a “Wall of Wonders” leak site to shame victims and pressure them into paying, a tactic increasingly common among ransomware‑as‑a‑service groups. 
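The shadow-copy deletion step is one of the most detectable moments in that sequence. A minimal command-line screening sketch follows; the patterns are illustrative assumptions, not a complete detection rule set:

```python
import re

# Command-line patterns commonly associated with shadow-copy destruction.
SUSPICIOUS_PATTERNS = [
    re.compile(r"vssadmin(\.exe)?\s+delete\s+shadows", re.IGNORECASE),
    re.compile(r"wmic\s+shadowcopy\s+delete", re.IGNORECASE),
    re.compile(r"Win32_ShadowCopy", re.IGNORECASE),
]

def is_suspicious(cmdline: str) -> bool:
    """Flag process command lines that look like shadow-copy deletion."""
    return any(p.search(cmdline) for p in SUSPICIOUS_PATTERNS)

assert is_suspicious("vssadmin.exe Delete Shadows /All /Quiet")
assert not is_suspicious("vssadmin list shadows")
```

In production this logic would live in an EDR or SIEM rule over process-creation events rather than a standalone script; the value of the heuristic is that legitimate bulk shadow-copy deletion is rare on most fleets.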

For defenders, the lesson is that post‑quantum encryption in ransomware is more about optics than a game‑changer—for now. Organizations should still prioritize basics: strict privilege control, regular air‑gapped backups, monitoring unusual PowerShell and vssadmin activity, and rapid patching of ESXi and Windows servers. As quantum‑resistant standards mature, the broader cybersecurity community gains experience, even if attackers are the first to weaponize them in limited test‑bed campaigns like Kyber.

Iran Claims US Used Backdoors To Disable Networking Equipment During Conflict Amid Unverified Cyber Sabotage Reports

 

Midway through the incident, Iranian officials pointed fingers at American cyber operations. Devices made by firms like Cisco and Juniper began failing without warning. Power cycles hit Fortinet and MikroTik hardware even as Tehran limited external connections. Outages appeared tied to U.S. digital interference, according to local reports. Backdoors or coordinated botnet attacks were named as possible causes. Global discussion flared up almost immediately. Tensions between nations climbed higher amid unverified assertions. 

Network disruptions coincided too closely with military actions, some analysts noted. These reports indicate Iranian officials see the outages as intentional interference, not equipment malfunction. What supports this view is the idea of harmful software hidden inside firmware or startup systems, set to activate remotely when signaled, possibly through satellite links. A different explanation considers dormant networks of infected machines, ready to shut down devices all at once if activated. Still, no proof supports these statements. 

Confirming them becomes nearly impossible because Iran has restricted online access for long periods, blocking outside observers from seeing what happens inside its digital networks. Weeks of broad internet blackouts continue across the region, making verification harder than expected under such isolation. Nowhere more visible than in official outlets, the accusations gain strength through repeated links to earlier reports. 

Because evidence once surfaced via Edward Snowden, it gets reused to support current assertions about U.S. practices. Hardware tampering stories resurface when discussions turn to digital trust. From that point onward, examples of intercepted equipment serve as grounding points. Even so, connections drawn today rely heavily on incidents described years ago. 

Thus, suspicion persists within broader debates over tech control. Even though the claims are serious, public confirmation of deliberate backdoors or a remote "kill switch" remains absent. Still, specialists point out past flaws found in gear from various makers. Yet linking widespread breakdowns to one unified assault demands strong validation. What matters is proof, not just patterns, when connecting such events. Nowhere is the worry over digital dependence more clear than in how fragile supply chains have become. 

A single compromised component might ripple across systems, simply because oversight lags behind complexity. Often, failures stem not from sabotage but from overlooked bugs or poor setup. Some breaches resemble accidents more than attacks, unfolding when neglected flaws are finally triggered. Rarely do we see deliberate tampering; far more common are gaps left open by routine mistakes. Hardware made abroad adds another layer of uncertainty, though the real issue may lie in how it's used, not where it's built. Even now, global power struggles shape how cyber actions are seen. 

As nations admit using online assaults during warfare, such events fit within larger strategic patterns. Still, absent solid proof, today’s accusations serve more as tools in storytelling contests among states. Truth be told, understanding cyber warfare grows tougher each year, as unclear technology limits, narrow access to data, and national agendas overlap. Though shutting down systems secretly from afar might work on paper, without outside verification, such claims sit closer to suspicion than proof.

Ransomware Campaign Leverages QEMU to Slip Past Enterprise Defences


 

In an effort to circumvent traditional security controls, hackers are increasingly relying on virtualisation as a covert execution layer, embedding malicious operations within QEMU environments. In observed incidents, adversaries deployed concealed virtual machines in which tooling and command execution occurred largely beyond the reach of endpoint detection systems, leaving minimal forensic artifacts on the host operating system. 

In most cases, these environments are introduced as virtual disk images disguised under atypical file extensions such as .db or .dll, and triggered by scheduled tasks with SYSTEM-level privileges to create a parallel runtime that blends with legitimate processes.
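Because the disguise relies on the file extension alone, a simple magic-byte check can catch it. This sketch looks for QCOW image headers hiding under non-disk extensions; the magic constant is QEMU's documented QCOW signature, while the extension allowlist is an illustrative assumption:

```python
import pathlib
import tempfile

QCOW_MAGIC = b"QFI\xfb"    # QCOW/QCOW2 disk image signature
DISK_EXTENSIONS = {".qcow", ".qcow2", ".img", ".raw", ".vhd", ".vmdk"}

def disguised_disk_image(path: pathlib.Path) -> bool:
    """True if the file is a QCOW image under a non-disk extension."""
    with open(path, "rb") as handle:
        header = handle.read(4)
    return header == QCOW_MAGIC and path.suffix.lower() not in DISK_EXTENSIONS

# Demo: a QCOW header saved under a database-style extension is flagged.
with tempfile.TemporaryDirectory() as tmp:
    fake = pathlib.Path(tmp) / "vault.db"
    fake.write_bytes(QCOW_MAGIC + b"\x00" * 28)
    assert disguised_disk_image(fake)

    honest = pathlib.Path(tmp) / "image.qcow2"
    honest.write_bytes(QCOW_MAGIC + b"\x00" * 28)
    assert not disguised_disk_image(honest)
```

A fleet-wide sweep would walk suspicious directories and pair this check with scheduled-task auditing, since the campaigns described here launch the images from SYSTEM-level tasks.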

According to analysts at Sophos, such techniques take advantage of the trust associated with widely used virtualization software, a pattern that extends to platforms such as Microsoft Hyper-V, Oracle VM VirtualBox, and VMware. These tactics reflect a broader strategic shift in which legitimate infrastructure is used to create isolated, low-noise environments that allow ransomware deployment while remaining effectively invisible to host-based defenses. The researchers also highlight that QEMU misuse is not a recent development; its resurgence in recent operations signals a renewed tactical emphasis on the technique. 

In late 2025, analysts identified two separate ransomware campaigns, STAC4713 and STAC3725, that use virtualised environments to avoid detection; STAC4713 is specifically associated with the financially motivated PayoutsKing cluster of ransomware activity. 

In the STAC4713 campaign, the attacker established persistence by creating a scheduled task, "TPMProfiler," which executed a concealed virtual machine with SYSTEM-level privileges. The deployed disk images were deliberately disguised as benign assets, initially appearing as database files and later taking on the appearance of dynamic link libraries, in order to blend seamlessly into routine system artifacts. 

Once active, the virtual instance initiated reverse SSH tunneling and port forwarding, forming covert communication channels that enabled sustained remote access while remaining outside the scope of conventional monitoring tools. 

Within this isolated Alpine Linux environment, adversaries employed a compact toolkit for tunneling, obfuscation, and data exfiltration, facilitating credential harvesting, extraction of Active Directory databases, and lateral exploration of network shares, all while utilizing legitimate system utilities. 

By integrating trusted binaries and hidden virtual infrastructure, this intentional convergence highlights a refined intrusion model where malicious activity is woven into normal system behavior, increasing the difficulty of detecting and responding to intrusions. 

Further investigation of STAC4713 has revealed that the activity dates to November 2025 and has been attributed to the GOLD ENCOUNTER threat group, which is directly associated with PayoutsKing ransomware, an operation that departs from the conventional ransomware-as-a-service model by executing intrusions without the assistance of affiliates. 

Since emerging in mid-2025, the group has demonstrated a focus on hypervisor-centric environments, developing customized encryption tools for platforms such as VMware ESXi and signaling a deliberate shift towards infrastructure-level disruption. 

Additionally, a second campaign, STAC3725, appeared in February 2026. This campaign accessed the system via the CVE-2025-5777 exploit chain before deploying a malicious instance of ConnectWise ScreenConnect to secure persistence. Using this foothold, attackers orchestrated credential harvesting against Active Directory environments using a concealed QEMU virtual machine. 

The intrusion sequence in STAC4713 is well planned, beginning with the creation of the "TPMProfiler" scheduled task, which executes qemu-system-x86_64.exe with SYSTEM privileges. The task boots a virtual hard drive image disguised as a benign file, initially named "vault.db" and later renamed "bisrv.dll," to evade scrutiny.

In addition to this obfuscation, network manipulation techniques are employed, including port forwarding from non-standard ports such as 32567 and 22022 to SSH port 22, while reverse tunnels involving AdaptixC2 or OpenSSH are used to maintain persistent and covert connectivity to attacker-controlled networks. Embedded virtual machines operate on Alpine Linux 3.22.0 images preconfigured to offer a compact but robust toolkit that enables the rapid transfer of data and execution of commands. 

The toolkit includes Linker2, AdaptixC2, the WireGuard obfuscation layer (wg-obfuscator), BusyBox, Chisel, and Rclone. In contrast, STAC3725 takes a more adaptive approach, compiling its toolset in situ within the virtual environment, including frameworks such as Impacket, KrbRelayX, Coercer, BloodHound.py, NetExec, Kerbrute, and Metasploit, along with Python, Rust, Ruby, and C dependencies. 

Post-compromise activities include credential extraction, Kerberos user enumeration via Kerbrute, Active Directory reconnaissance via BloodHound, and payload staging over FTP channels, demonstrating a methodical and deeply embedded attack model in which virtualization serves not only as a concealment mechanism, but also as a platform for sustained intrusion. 

In sum, the activity of STAC4713 and STAC3725 indicates a calculated evolution in adversary tradecraft in which virtualisation is no longer a peripheral evasion tactic but a critical component of operations. Malicious workflows may be embedded within QEMU instances and aligned with trusted system processes, decoupling the attackers' activities from the host environment. 

As a result, conventional endpoint controls struggle to detect the activity while the attackers maintain persistent, low-noise access. The use of disguised storage artifacts, SYSTEM-level task execution, and encrypted communication channels demonstrates a disciplined approach to stealth, while the integration of credential harvesting, Active Directory reconnaissance, and lateral movement capabilities highlights the end-to-end nature of the intrusion. 

Sophos has observed that the resurgence of such campaigns indicates a broader industry challenge, in which legitimate infrastructure and administrative tools are increasingly repurposed to undermine defensive assumptions. 

Virtualised attack frameworks, with their convergence of concealment, persistence, and operational depth, provide a formidable vector for modern ransomware operations, requiring detection strategies to extend beyond the host to the virtual layers in which adversaries now operate.

North Korea-Linked Hackers Target Crypto Platforms, $500M Stolen

 



Cybersecurity researchers are raising alarms over a developing pattern of cryptocurrency thefts linked to North Korean actors, with recent incidents suggesting a move from isolated breaches to a sustained and structured campaign. In a span of just over two weeks, attacks targeting the Drift trading platform and the Kelp protocol resulted in losses exceeding $500 million, pointing to a level of coordination that goes beyond opportunistic hacking.

What initially appeared to be separate security failures is now being viewed as part of a broader operational strategy, likely driven by the financial pressures faced by a heavily sanctioned state. Shortly after attackers used social engineering techniques to compromise Drift, another incident emerged involving Kelp, a restaking protocol integrated with cross-chain infrastructure.

The Kelp breach marks a noticeable turn in attacker behavior. Rather than exploiting traditional software bugs or stealing credentials, the attackers targeted fundamental design assumptions within decentralized systems. Examined together, the two incidents indicate a deliberate escalation in efforts to extract value from the crypto ecosystem.

Alexander Urbelis of ENS Labs described the pattern as systematic rather than incidental, noting that the frequency and timing of these events resemble an operational cycle. He warned that reactive fixes alone are insufficient against threats that follow a structured tempo.


Breakdown of the Kelp exploit

Unlike many traditional cyberattacks, the Kelp incident did not involve bypassing encryption or stealing private keys. Instead, the system behaved as designed, but was fed manipulated data. Attackers altered the inputs that the protocol relied on, causing it to validate transactions that never actually occurred.

Urbelis explained that while cryptographic signatures can verify the origin of a message, they do not ensure the truthfulness of the information being transmitted. In simple terms, the system confirmed who sent the data, but failed to verify whether the data itself was accurate.
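The distinction can be shown in a few lines. An HMAC stands in here for any signature scheme (an illustrative assumption): it proves a message came from a key holder, while saying nothing about whether the claim inside the message is true.

```python
import hashlib
import hmac

shared_key = b"verifier-relay-key"    # hypothetical attester key

# A syntactically valid but factually false cross-chain claim.
false_claim = b'{"event": "burn", "asset": "rsETH", "amount": 116500}'
tag = hmac.new(shared_key, false_claim, hashlib.sha256).digest()

def origin_verified(key: bytes, message: bytes, mac: bytes) -> bool:
    """Confirms WHO signed the message, not WHETHER it is true."""
    expected = hmac.new(key, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, mac)

# The signature checks out even though the burn never happened.
assert origin_verified(shared_key, false_claim, tag)
```

Nothing in the verification step consults the source chain's actual state; that gap between authentication and attestation is the weakness the attackers exploited.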

David Schwed of SVRN reinforced this view, stating that the exploit was not based on breaking cryptography, but on taking advantage of how the system had been configured.

A central weakness was Kelp’s dependence on a single verifier to validate cross-chain messages. While this approach improves efficiency and simplifies deployment, it removes an essential layer of security redundancy. In response, LayerZero has advised projects to adopt multiple independent verifiers, similar to requiring multiple approvals in traditional financial systems.
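The multiple-verifier recommendation amounts to a quorum rule. A minimal sketch follows; the verifier names and threshold are illustrative:

```python
def message_accepted(votes: dict, quorum: int) -> bool:
    """Accept a cross-chain message only if >= quorum verifiers attest."""
    approvals = sum(1 for attested in votes.values() if attested)
    return approvals >= quorum

votes = {"verifier-a": True, "verifier-b": True, "verifier-c": False}

assert message_accepted(votes, quorum=2)        # 2-of-3 passes
assert not message_accepted(votes, quorum=3)    # 3-of-3 fails closed

# A single-verifier configuration is the degenerate case the advisory
# warns against: one compromised attester decides everything.
assert message_accepted({"only-verifier": True}, quorum=1)
```

The security property only holds if the verifiers are operationally independent; N replicas of the same infrastructure still behave like the single-verifier case.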

However, this recommendation has sparked criticism. Some experts argue that if a configuration is known to be unsafe, it should not be offered as a default option. Relying on users to manually implement secure settings, especially in complex environments, increases the likelihood of misconfiguration.


Contagion across interconnected systems

The impact of the Kelp exploit did not remain confined to a single platform. Decentralized finance systems are deeply interconnected, with assets frequently reused across multiple protocols. This creates a chain of dependencies, where a failure in one component can propagate across others.

Schwed described these assets as interconnected obligations, emphasizing that the strength of the system depends on each individual link. In this case, lending platforms such as Aave, which accepted the affected assets as collateral, experienced financial strain. This transformed an isolated breach into a broader ecosystem-level disruption.


Reassessing decentralization claims

The incident also exposes a disconnect between how decentralization is promoted and how systems actually function. A structure that relies on a single point of verification cannot be considered fully decentralized, despite being marketed as such.

Urbelis expanded on this by noting that decentralization is not an inherent feature, but the result of specific design decisions. Weaknesses often emerge in less visible layers, such as data validation or infrastructure components, which are increasingly becoming primary targets for attackers.

The activity aligns with a bigger change in strategy by groups such as Lazarus Group. Instead of focusing only on exchanges or obvious coding flaws, attackers are now targeting foundational infrastructure, including cross-chain bridges and restaking mechanisms.

These components play a critical role in enabling asset movement and reuse across blockchain networks. Their complexity, combined with the large volumes of value they handle, makes them particularly attractive targets.

Earlier waves of crypto-related attacks often focused on centralized platforms or easily identifiable vulnerabilities. In contrast, current operations are increasingly directed at the underlying systems that connect the ecosystem, which are harder to monitor and more prone to configuration errors.

Importantly, the Kelp exploit did not introduce a new category of vulnerability. Instead, it demonstrated how existing weaknesses remain exploitable when not properly addressed. The incident underscores a recurring issue in the industry: security measures are often treated as optional guidelines rather than mandatory requirements.

As attackers continue to enhance their methods and increase the pace of operations, this gap becomes easier to exploit and more costly for organizations. The growing sophistication of these campaigns suggests that the primary risk may not lie in unknown flaws, but in the failure to consistently address well-understood security challenges.

Terms And Conditions Grow Harder To Read As Platforms Limit Users’ Legal Rights, Study Finds

 

Most people click "agree" without looking - yet those agreements keep getting harder to understand. Complexity rises, researchers note, just as user protections shrink. From Cambridge, a recent study points out expanded corporate access to personal information. Legal barriers grow tougher, making it more difficult to take firms to court. Lengthy clauses quietly reshape power, favoring businesses over individuals. Beginning with a project called the Transparency Hub, results emerge from systematic tracking of legal texts across 300-plus online platforms. 

Stored within it: twenty thousand iterations - past and present - of service conditions and privacy notices from apps like TikTok, among others. Over months, changes in wording reveal shifts in corporate approaches to personal information. What users agree to today may differ subtly from last year’s version, now preserved here. Visibility grows when updates accumulate, showing patterns once hidden beneath routine acceptance clicks. Surprisingly clear trends show a steady drop in how easily people can read service contracts. 

From 2016 to 2025, analysis applying the Flesch-Kincaid method reveals that nearly 86 percent of the agreements demand reading skills typical of university-level readers. Because of this shift, grasping the full meaning behind digital consent has grown harder for most individuals. While signing up seems routine, the depth of understanding often lags behind. Beyond mere complexity, attention turns to changing corporate approaches to handling disagreements. While once settled in open courtrooms, conflict resolution now leans on closed-door arbitration imposed by platform rules. 
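The Flesch-Kincaid grade level mentioned above is a simple formula over sentence length and word length. This sketch implements it with a deliberately naive vowel-group syllable counter, which is a rough assumption; production readability tools use pronunciation dictionaries instead:

```python
import re

def count_syllables(word: str) -> int:
    """Naive estimate: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    """FK grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / sentences)
            + 11.8 * (syllables / len(words))
            - 15.59)

plain = "We may share your data. You can say no."
dense = ("Notwithstanding contrary stipulations, the aforementioned "
         "organization reserves unilateral discretionary authorization "
         "to disseminate subscriber information internationally.")

# Denser legalese scores a markedly higher grade level.
assert flesch_kincaid_grade(dense) > flesch_kincaid_grade(plain)
```

A grade above roughly 12 implies university-level reading skill, which is the threshold behind the 86 percent figure cited in the study.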

A third-party referee reaches final judgments, yet clarity tends to fade behind closed processes. Users find their options shrinking when collective lawsuits are blocked. Even mediator choices sometimes rest with the businesses involved, quietly shaping outcomes. Newer artificial intelligence platforms like Anthropic and Perplexity AI also follow this pattern, embedding clauses that block participation in group litigation. Because of this, anyone feeling wronged has to file a personal claim - often pricier and weaker than joining others in court. A few companies allow narrow chances to decline the clause; however, acting fast after registration is usually required. 

Now appearing, this study arrives as officials across Europe weigh tighter rules for online services, focusing on effects tied to youth engagement. With France leading examples, followed by Spain, Portugal, and Denmark, governments test new steps aimed at tackling unease around digital privacy and web-based risks. One thing stands out: laws around online services are drifting further from what everyday users can grasp. 

Though written rules get longer and tighter, people must now sort through fine print that defines their digital freedoms - frequently unaware of what they’re agreeing to. While clarity lags behind complexity, personal responsibility quietly expands.

Lazarus Hackers Steal $290M from KelpDAO in Cross-Chain Exploit

 

KelpDAO has become the latest DeFi project to face a major security crisis after a $290 million heist that investigators say is likely tied to North Korea’s Lazarus Group. The attack targeted rsETH, a restaked ether asset used across several protocols, and drained about 116,500 tokens in a matter of hours. What makes the incident alarming is that the exploit did not appear to rely on a typical smart-contract flaw. Instead, it seems to have abused the project’s cross-chain verification setup, showing how a vulnerability in infrastructure can be just as damaging as a bug in code. 

According to the project’s public statement, KelpDAO detected suspicious cross-chain activity involving rsETH on April 18, 2026, and quickly paused rsETH contracts across Ethereum mainnet and Layer 2 networks. The team said it was working with LayerZero, Unichain, and other partners to investigate the breach and contain the damage. On-chain activity later showed that the stolen funds were moved through Tornado Cash, a common laundering route used to hide crypto theft. 

LayerZero’s early findings suggest the attack was highly coordinated. Researchers believe the hackers compromised RPC nodes and then used a DDoS campaign to force the system into failing over to poisoned infrastructure, where fraudulent cross-chain messages could be accepted as legitimate. In other words, the attackers appear to have tricked the bridge layer into believing a transfer had been properly authorized. That design weakness, rather than the asset itself, seems to have opened the door to the theft. 

The impact propagated quickly beyond KelpDAO. Because rsETH is accepted as collateral in lending markets, the exploit created risk for other DeFi platforms, including Compound, Euler, and Aave. Aave responded by freezing and blocking new deposits or borrowing using rsETH collateral. The wider market reaction highlights how one compromised bridge can ripple across multiple protocols, creating uncertainty far beyond the original target. 

The KelpDAO incident is another reminder that DeFi security depends not only on smart-contract audits, but also on the trust assumptions behind cross-chain systems. As protocols grow more interconnected, attackers need only find one weak link to trigger losses on a massive scale. For users and developers alike, the lesson is clear: layered security, diversified verification, and conservative bridge design are no longer optional.
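The "diversified verification" principle mentioned above can be sketched abstractly: a bridge should accept a cross-chain message only when a quorum of independent verifiers confirms it, so that one poisoned verification path cannot authorize a transfer on its own. This is a hypothetical illustration, not KelpDAO's or LayerZero's actual design; the function names and structure are assumptions.

```python
def accept_message(message, verifiers, quorum=2):
    """Accept a cross-chain message only if at least `quorum`
    independent verifiers confirm it. A single compromised
    verification path then cannot authorize a transfer alone."""
    confirmations = sum(1 for verify in verifiers if verify(message))
    return confirmations >= quorum
```

With two honest verifiers and one poisoned one, a fraudulent message confirmed only by the poisoned path falls short of the quorum and is rejected.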

PyTorch Lightning and Intercom Client Users Exposed to Credential Stealing Campaign


 

Python's software supply chain has been hit by a sophisticated compromise targeting the popular PyPI package Lightning, exposing downstream machine learning environments to covert credential theft. 

According to researchers at Aikido Security, OX Security, Socket, and StepSecurity, versions 2.6.2 and 2.6.3, both published on April 30, 2026, were maliciously modified as part of a broader intrusion tied to the "Mini Shai-Hulud" campaign. 

A day earlier, the attack had surfaced through compromised SAP-related npm packages, underlining an ongoing trend of coordinated cross-ecosystem supply chain threats targeting high-value development environments. The compromise places organizations that use PyTorch Lightning, an open-source abstraction layer over PyTorch with over 31,000 stars on GitHub, at significant risk. 

Lightning is frequently embedded in dependency trees supporting image classification, fine-tuning of large language models, diffusion workloads, and forecasting, and this ubiquity widened the scope of the attack. 

A standard pip install lightning command was sufficient to activate the malicious chain; exploitation did not require a sophisticated trigger. Upon installation of the compromised package, a hidden _runtime directory containing obfuscated JavaScript was created and executed automatically upon module import. This behavior was embedded within the package's initialization logic, ensuring that no additional user interaction was required to execute the script. 

Once delivered, a Python script (start.py) downloaded the Bun JavaScript runtime from external sources, followed by an 11 MB obfuscated file (router_runtime.js) which carried out the attack sequence in stages. This cross-language execution model, running JavaScript from within a Python package, marks a significant evolution in attacker tradecraft and complicates detection mechanisms that focus on single-language threats.

The malware's primary objective was credential harvesting. Analysis indicates that the malware targeted GitHub tokens, cloud service credentials spanning Amazon Web Services (AWS), Google Cloud Platform (GCP), and Azure, SSH keys, NPM tokens, Kubernetes configurations, Docker credentials, and environment variables systematically. Moreover, it was also capable of accessing cryptocurrency wallets and developer secrets stored within local and continuous integration/continuous delivery environments. 

Using the compromised credentials, stolen data was exfiltrated, often by automating commits to attacker-controlled GitHub repositories, effectively concealing the malicious activity within legitimate developer workflows. Distinctive markers linked the campaign to the "Shai-Hulud" identity. 

Infected environments were observed creating public repositories with unusual naming conventions, including EveryBoiWeBuildIsaWormBoi and descriptions such as "A Mini Shai-Hulud has appeared." Attackers seem to be able to track compromised systems using these artifacts both as infection indicators and as signalling mechanisms. 

The activity has been linked to a financially motivated threat group referred to as TeamPCP, which has consistently demonstrated a focus on credential-rich development environments. According to OX Security, approximately 8.3 million downloads are likely to have been exposed as a result of the incident. 

The compromise of Intercom-Client on the same day further demonstrates the coordinated nature of the campaign. These incidents follow a series of supply chain breaches affecting npm, PyPI, and Docker Hub between April 21 and 23, suggesting a deliberate and sustained effort to infiltrate widely trusted software distribution channels.

Further examination of the router_runtime.js payload uncovered extensive obfuscation and a clear focus on credential access and repository manipulation. Approximately 700 references to process and environment variables, over 460 references to authentication tokens, and roughly 330 references to code repositories were identified. 

These patterns closely match earlier Shai-Hulud operations, which emphasize code reuse and iterative refinement of attack techniques. Furthermore, the payload was also capable of poisoning GitHub repositories and propagating through npm packages, raising concerns about secondary infection vectors beyond data exfiltration. 

The Lightning-AI project became aware of the compromise when a user reported suspicious behavior under issue #21689, titled “Possible supply chain attack on version 2.6.3.” The report described a hidden execution chain that downloaded the Bun runtime and executed a large obfuscated payload during module import. Despite this, the issue was later closed without clarification, creating uncertainty about the project's initial response. 

Following Socket's disclosure in the Lightning-AI/pytorch-lightning repository, an even more unusual outcome occurred. In a matter of seconds, an account identified as pl-ghost closed the issue warning about compromised versions, and then posted a meme entitled "SILENCE DEVELOPER." This behavior has raised immediate concerns about potential account compromise since it was seen as anomalous. 

Additional suspicious activity was tied to the same account, including six rapid branch creations and deletions across multiple repositories within approximately 70 minutes. Several of these branches followed random 10-character lowercase naming conventions, consistent with the behavior of the Shai-Hulud worm as it probes for write access. 

One branch impersonated Dependabot but contained inconsistencies such as a misspelled identifier and an incorrect naming structure. All branches were deleted within seconds of being created, and none triggered workflows, indicating automated probing rather than legitimate development activity. This combined evidence strongly suggests that the maintainer account was compromised, possibly with the same stolen credentials that enabled the malicious package publication on PyPI. 

Upon learning of the incident, Python Package Index administrators quarantined Lightning versions that may have been affected. According to the maintainers, an investigation is underway in order to determine the cause, as the compromised releases introduced functionality that was consistent with credential harvesting methods. 

In the meantime, developers are strongly advised to remove versions 2.6.2 and 2.6.3 from their environments, downgrade to version 2.6.1, and rotate any potentially exposed credentials across cloud and development platforms, including API keys, tokens, and access credentials. The campaign is also evolving beyond Python.
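The first remediation step, confirming whether an environment has a compromised release installed, can be automated with a short check against installed package metadata. This is a minimal sketch; the package name and version set come from the public reports above.

```python
from importlib import metadata

# Versions reported as maliciously modified on PyPI.
COMPROMISED = {"2.6.2", "2.6.3"}

def is_compromised(package="lightning"):
    """Return True if the installed version of `package` is one of
    the versions reported as compromised."""
    try:
        version = metadata.version(package)
    except metadata.PackageNotFoundError:
        return False  # not installed, nothing to remediate
    return version in COMPROMISED
```

Running this across CI runners and developer machines would quickly identify which hosts need the downgrade and credential rotation described above.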

Researchers have confirmed that version 7.0.4 of the intercom-client package within the Node ecosystem has also been compromised, using a preinstall hook to execute credential-stealing malware. Packagist has also been affected, where the intercom/intercom-php package (version 5.0.2) was altered to include a Composer plugin that downloads the Bun runtime via a shell script (setup-intercom.sh) and executes the same obfuscated payload during installation and updates. 
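Because the intercom-client compromise relied on an npm preinstall hook, one practical audit is to flag any lifecycle scripts in a package.json that execute automatically at install time. A minimal sketch (written in Python for consistency with the examples above); the hook list reflects standard npm lifecycle events:

```python
import json

# npm lifecycle scripts that run automatically during `npm install`;
# a common supply chain execution vector.
RISKY_HOOKS = {"preinstall", "install", "postinstall"}

def risky_scripts(package_json_text):
    """Return the lifecycle scripts in a package.json that execute
    automatically at install time and so deserve manual review."""
    scripts = json.loads(package_json_text).get("scripts", {})
    return {name: cmd for name, cmd in scripts.items() if name in RISKY_HOOKS}
```

Flagged entries are not necessarily malicious, many legitimate packages use postinstall steps, but they mark exactly where install-time code execution can hide.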

The stolen data was encrypted and exfiltrated to a remote server endpoint, further demonstrating the campaign's adaptability across ecosystems. It has been determined that the GitHub account "nhur" was likely compromised, and that the malicious intercom-client package was published through an automated Continuous Integration workflow triggered by a now-deleted GitHub branch.

It appears that technical overlap exists among the npm, PyPI, and PHP ecosystems, with similarities in exfiltration techniques based on GitHub, credential targeting patterns, and payload structures. Furthermore, researchers have found similarities between these attacks and previous ones affecting organizations such as Checkmarx, Bitwarden, Telnyx, LiteLLM, and Aqua Security's Trivy, which supports the hypothesis that a single threat actor is responsible. 

Upon suspension from mainstream platforms, TeamPCP reportedly launched an onion-based platform on the dark web to expand its presence. Additionally, the actors have publicly referenced their ties with other cybercriminal groups, including LAPSUS$, while marketing their own tooling infrastructure. 

The developments suggest that the threat landscape is becoming increasingly organized and persistent, with supply chain attacks serving not as isolated incidents but as a broader strategy for infiltrating and monetizing developer ecosystems. The Lightning and Intercom compromises remain a stark reminder of the fragility of modern software supply chains as investigations continue. 

With attackers increasingly capable of pivoting across ecosystems and exploiting trusted distribution channels, organizations operating in cloud-native and AI-driven environments must rely on robust dependency auditing, real-time monitoring, and rapid incident response. 

The incident highlights a critical juncture in software supply chain security, at which trusted ecosystems are increasingly being weaponised through stealthy, cross-language attack chains that are emerging from across the globe. The coordinated compromises of PyPI, npm, and Packagist packages, together with evidence of maintainer account abuse and automated propagation techniques, demonstrate a high level of operational maturity that challenges traditional methods of detection and response. 

It is now necessary to take proactive measures to guard against threats such as TeamPCP, who have demonstrated their capability to infiltrate developer workflows on a large scale. These include rigorous dependency auditing, tighter access controls, and continuous monitoring of build environments. 

It is imperative to safeguard the integrity of open-source components in order to maintain confidence in modern software development in the present threat landscape.

Are You Letting AI Do Too Much of Your Thinking?

 




As artificial intelligence tools take on a growing share of everyday thinking tasks, researchers are raising concerns that this shift may be quietly affecting how people process information, remember ideas, and engage with their own work.

When Nataliya Kosmyna reviewed applications for internships, she noticed a pattern that stood out. Many cover letters were structured in nearly identical ways, written in polished language, and included vague or forced connections to her research. The consistency suggested that applicants were relying on large language models, the technology behind tools such as ChatGPT, Google Gemini, and Claude.

At the same time, while teaching at the Massachusetts Institute of Technology, Kosmyna began noticing that students were finding it harder to retain what they had learned. Compared to previous years, more students struggled to recall material, which led her to question whether growing dependence on AI tools could be influencing cognitive abilities.

Researchers studying human-computer interaction are increasingly concerned that relying too heavily on AI may alter not just how people write but how they think. This phenomenon, often described as “cognitive offloading,” refers to shifting mental effort onto external tools. While this has existed for years with calculators and search engines, experts warn that AI systems may deepen the effect because they generate complete responses rather than simply helping users find information.

Earlier research on internet usage identified what is known as the “Google effect,” where people became less likely to remember facts because they could easily look them up. Some researchers argued that this allowed the brain to focus on more complex tasks. However, AI tools now go a step further by producing answers, arguments, and even creative content, reducing the need for active thinking.

To better understand the impact, Kosmyna and her team conducted an experiment involving 54 students. Participants were divided into three groups. One group used AI tools to write essays, another relied on search engines without AI-generated summaries, and a third completed the task without any digital assistance. Their brain activity was monitored while they worked on open-ended topics such as happiness, loyalty, and everyday decisions.

The differences were clear. Students who worked without any tools showed strong and widespread brain activity across multiple regions. Those using search engines still demonstrated notable engagement, particularly in areas related to visual processing. In contrast, the group using AI tools showed comparatively lower brain activity, with levels dropping by as much as 55%. Activity in areas linked to creativity and deeper thinking was especially reduced.

The impact extended beyond brain activity. Students who used AI struggled to recall what they had written shortly after completing their essays. Several participants also reported feeling disconnected from their work, as if they had not fully contributed to it. Similar findings from other studies suggest that frequent use of AI tools can weaken memory retention and recall.

Research from the University of Pennsylvania introduces another concern described as “cognitive surrender,” where users accept AI-generated responses without questioning them. In such cases, individuals may rely on the system’s output even when it conflicts with their own understanding.

The effects are not limited to academic settings. A multinational study found that medical professionals who relied on AI tools for detecting colon cancer became less accurate when asked to identify cases without assistance after several months of use. This suggests that repeated dependence on AI may reduce independent decision-making skills, even in critical fields.

Kosmyna also observed that essays written with AI tended to be highly similar, lacking variation in style and depth. Teachers reviewing the work described it as uniform and lacking originality. In some cases, the responses were so alike that it appeared as though students had collaborated, even when they had not.

Follow-up observations months later revealed further differences. Students who had previously relied on AI showed weaker neural connectivity when asked to complete tasks without it, compared to those who had worked independently earlier. This may indicate that they had engaged less deeply with the material from the start.

Vivienne Ming, author of Robot Proof, has raised similar concerns. In her research, students asked to make real-world predictions often defaulted to copying answers from AI systems instead of forming their own conclusions. Brain measurements showed low levels of gamma wave activity, which is associated with active thinking. Reduced gamma activity has been linked in other studies to cognitive decline over time.

However, not all users showed the same pattern. A small group, fewer than 10%, used AI differently by treating it as a source of information rather than a final answer. These individuals analysed the output themselves, showed stronger brain engagement, and produced more accurate results.

The concerns echo earlier findings related to navigation technology. Increased reliance on GPS has been associated with reduced spatial memory in some studies. Weak spatial navigation skills have also been explored as a possible early indicator of conditions such as Alzheimer's disease. These parallels suggest that reduced mental effort over time may have broader cognitive consequences.

Researchers emphasize that AI itself is not the problem but how it is used. Ming advocates for a more deliberate approach, where individuals think through problems first and then use AI to test or refine their ideas. She suggests methods such as asking AI to challenge one’s reasoning or limiting it to providing context instead of direct answers, encouraging deeper engagement.

Kosmyna similarly recommends building a strong understanding of subjects without AI assistance before integrating such tools into the learning process.

The alarming takeaway from the current research is clear. While AI offers efficiency and convenience, it may also encourage mental shortcuts. Human cognition depends on regular effort and engagement, and reducing that effort could carry long-term consequences. As these tools become more integrated into daily life, the challenge will be to use them in ways that support thinking rather than replace it.



eth.limo DNS Hijack Thwarted By DNSSEC After Social Engineering Attack On EasyDNS

 

The ENS gateway known as eth.limo revealed a DNS hijack stemming from a social engineering scheme aimed at EasyDNS, its domain provider. Though settings shifted temporarily under unauthorized access, safeguards held firm throughout. Protection layers blocked harm, keeping user activity untouched during the episode. Compromise occurred at the registrar level - yet defenses prevented escalation beyond domain redirection. The incident began when a person pretending to be part of the eth.limo team tricked EasyDNS support into initiating an account reset. 

Because of that mistaken trust, the intruder gained entry and altered where the domain pointed, shifting it first through servers at Cloudflare, then moving again toward Namecheap systems. Right away, automatic warnings went off once those shifts happened, which gave the real eth.limo members time to react fast. Their quick actions reversed the breach soon afterward. eth.limo acts as a bridge, routing requests from regular browsers to data hosted on networks such as IPFS, Arweave, and Swarm - a role that makes it a single point of failure. Because its DNS setup uses wildcards, countless .eth addresses rely on the same infrastructure - making them vulnerable when one part fails. 

Traffic meant for legitimate decentralized sites might instead flow toward harmful servers under attacker control. Notable resources, even those tied to figures like Vitalik Buterin, faced potential exposure should deception tactics have taken hold. Stopping the damage came down to DNS Security Extensions - called DNSSEC by many. Not through speed, but through verification: it checks DNS replies with digital signatures. Without access to the correct private keys, the hacker's fake entries could not pass these tests. Because validation failed, devices refused the corrupted data, showing failures rather than loading harmful pages. 
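The fail-closed behavior described here can be sketched abstractly: a validating resolver returns a record only when its signature verifies, and otherwise serves nothing at all. In this hypothetical sketch, `verify_rrsig` is a stand-in for real DNSSEC validation (RRSIG checks against the zone's published key chain), not an implementation of it.

```python
class ValidationError(Exception):
    """Raised when a DNS answer fails signature validation."""

def resolve(record, signature, verify_rrsig):
    """Fail-closed resolution: return the record only when its
    signature verifies. `verify_rrsig` stands in for real DNSSEC
    cryptographic validation against the zone's keys."""
    if not verify_rrsig(record, signature):
        # Serve nothing rather than attacker-controlled data.
        raise ValidationError("bogus signature for %s" % record["name"])
    return record
```

This is why the hijack produced visible failures instead of silent redirection: without the zone's private key, the attacker's substituted records could not produce signatures that validate, so resolvers refused to hand them to users.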

Though eth.limo and EasyDNS saw interference, they noted minimal reach due to this layer. To date, no individuals have faced consequences from the attempt. Surprisingly, EasyDNS spoke out after the event, calling it their initial customer-targeted social engineering success in almost thirty years. Following this, improvements to internal procedures are underway. Instead of old methods, eth.limo will shift to a tighter system - one without recovery pathways. That change aims to block repeat incidents. 

Over time, weaker entry points may fade. Security evolves differently now. Most recent cases show similar patterns across decentralized services. Though blockchains themselves stay distributed and protected, the websites people actually visit run on standard domain setups. These entry points open doors hackers are now using more frequently. Instead of breaking encryption, they shift traffic by manipulating DNS records. Users get sent elsewhere without noticing - sometimes losing assets quickly. Security layers matter more than ever, shown clearly by what happened with eth.limo. 

Even when human manipulation tricks succeed, safeguards such as DNSSEC often stop further damage. Because digital dangers keep changing shape, companies - especially in cryptocurrency - now pay closer attention to protecting not just blockchain networks but also the traditional services people rely on to reach them.

Stryker Attack Wipes Thousands of Devices Without Malware

 

Stryker’s latest cyber incident is a stark reminder that attackers do not always need malware to cause major damage. The medical technology company said the breach was confined to its internal Microsoft environment and did not affect its products, including connected and life-saving devices, which remain safe to use. Even so, the attack disrupted business operations and forced customers to place orders manually while electronic ordering systems stayed offline. 

According to the report, the incident was not a ransomware attack, and Stryker emphasized that no malware was deployed on its systems. Instead, the threat actor appears to have used legitimate Microsoft Intune tools to remotely wipe devices after compromising an administrator account and creating a new Global Administrator account. That method made the attack especially dangerous because it relied on trusted enterprise controls rather than suspicious malicious software. 

The scale of the wipe was severe. A source familiar with the attack told BleepingComputer that nearly 80,000 devices were erased between 5:00 and 8:00 a.m. UTC on March 11. Employees across multiple countries reportedly woke up to find company-managed laptops and mobile devices wiped overnight. The group Handala, believed to be linked to Iran, claimed responsibility and said it had destroyed over 200,000 systems and stolen 50 terabytes of data, though investigators did not confirm those claims. 

What makes this case notable is that the attack appears to have used “living off the land” tactics, meaning the intruder abused legitimate administrative access rather than deploying custom code. That approach can be harder to detect because security tools often look for malware signatures or known exploit behavior, not authorized commands executed by a compromised admin account. The result is a fast, high-impact disruption that can spread across a corporate fleet in hours. 

For enterprises, the Stryker case reinforces the need for stronger identity protection, tighter administrator controls, and better monitoring of cloud management platforms. Privileged access should be minimized, account creation should be closely audited, and wipe capabilities should require strong checks before execution. In this incident, the attacker did not need an exploit or a virus; a stolen credential and a legitimate tool were enough to cripple a large organization.
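One of the controls suggested above, closely auditing privileged account activity, can be sketched as a simple filter over directory audit events. The event schema here ('time', 'action', 'role') is a hypothetical simplification; real Entra ID and Intune audit logs use their own formats and would be queried through their APIs.

```python
from datetime import datetime, timedelta, timezone

# Roles whose assignment should always trigger review.
PRIVILEGED_ROLES = {"Global Administrator"}

def flag_privileged_grants(events, window_hours=24):
    """Return recent audit events that grant a privileged role.
    `events` is a list of dicts with hypothetical keys:
    'time' (aware datetime), 'action', and 'role'."""
    cutoff = datetime.now(timezone.utc) - timedelta(hours=window_hours)
    return [
        event for event in events
        if event["action"] == "Add member to role"
        and event["role"] in PRIVILEGED_ROLES
        and event["time"] >= cutoff
    ]
```

Alerting on any hit from such a filter, rather than waiting for malware signatures, is one of the few ways to catch a living-off-the-land intrusion like this one before the wipe commands go out.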

Retailer Secures Website After Customer Data Leak Risk Identified


 

Express has quietly fixed a security flaw that permitted unauthorized access to customer order data following a significant lapse in web application security. The vulnerability exposed sensitive information including customer names, emails, telephone numbers, shipping details, and partial payment data, after order confirmation pages were inadvertently indexed by search engines and made publicly discoverable.

At least a dozen such records appeared in search results, demonstrating that sequential order identifiers embedded within URLs could be exploited without sophisticated intrusion techniques. The issue was uncovered during a fraud investigation by an independent security researcher, highlighting how seemingly routine investigations can reveal deeper systemic weaknesses in data handling and access controls. The company then took immediate corrective measures.

A wide variety of personally identifiable information appeared in the exposed records, including customer names, phone numbers, email addresses, and billing and delivery locations, as well as masked payment card information, all accessible via publicly reachable order confirmation pages. Because of inadequate access controls and predictable URL patterns, users could enumerate order records simply by altering parameters within the web address.
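The root cause, predictable sequential identifiers, has a well-known mitigation: unguessable per-order tokens in URLs, alongside proper access checks and noindex directives on confirmation pages. A minimal contrast, with illustrative function names:

```python
import secrets

# Sequential identifiers are trivially enumerable: knowing one valid
# order number yields its neighbors by simple incrementing.
def next_sequential_id(last_id):
    return last_id + 1

# An unguessable per-order token (~128 bits of randomness) removes
# the enumeration path, provided access checks also remain in place.
def new_order_token():
    return secrets.token_urlsafe(16)
```

A random token is not a substitute for authentication, but it ensures that an attacker who sees one order URL learns nothing about any other order's URL.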

While investigating a suspicious transaction involving a family member, Rey Bango discovered that a simple search query could reveal unrelated customer orders that had previously been indexed by search engines. 

Following disclosure of the incident, Express, which is now owned by WHP Global, took steps to remediate the issue. However, the company has not yet clarified whether affected individuals will receive formal notification. While reaffirming the organization's commitment to safeguarding consumer data and encouraging responsible reporting of vulnerabilities, Joe Berean did not outline a structured vulnerability reporting process. 

A number of data exposure incidents have been linked to misconfigured web assets in the past year, reinforcing the persistent gaps in secure development practices as well as the challenges that enterprises must overcome when preventing unintended data leaks at large scales. 

The discovery was largely accidental, stemming from Rey Bango's attempt to validate a potentially fraudulent transaction involving a family member's account. In the absence of a clearly defined reporting channel, he escalated the issue by submitting a report to ensure prompt resolution. Based on his findings, search engines could surface unrelated customer records when order numbers were queried, owing to indexed confirmation pages coupled with sequential order identifiers. 

Independent verification confirmed that minor manipulation of URL parameters enabled unauthorized access to other users' order histories and personal information, a vulnerability that could be amplified through automated enumeration. Express addressed the flaw after it was disclosed, but it remains unclear whether affected customers will be notified and whether forensic logs can determine the extent of unauthorized access. 

The company’s marketing head, Joe Berean, reinforced the company's commitment to data security, but offered limited transparency regarding incident response measures, such as the absence of information about a formal vulnerability disclosure framework or regulatory notification requirements. 

The lack of clarity regarding follow-up compliance, particularly concerning U.S. breach disclosure requirements, highlights persistent governance gaps. As seen in recent disclosures involving Home Depot and Petco, this episode aligns with a broader pattern of exposure incidents tied to misconfigurations. When security controls are overlooked, sensitive customer data remains accessible, underscoring the ongoing challenge of enforcing robust web application security. 

The incident illustrates how relatively simple design oversights, such as predictable identifiers and improperly restricted web resources, can quickly morph into large-scale privacy risks, when combined with search engine indexing and absent disclosure mechanisms. 

The company has taken steps to resolve the immediate vulnerability, but the lack of clarity around notification to customers, audit logging, and formal vulnerability intake procedures raises concerns regarding incident readiness and accountability. 

Due to the expansion of digital commerce footprints, the case illustrates the necessity of incorporating secure-by-design principles, in addition to implementing robust access controls and maintaining transparent reporting mechanisms in order to address flaws before they become more serious. 

When these safeguards are not in place, even routine transactional systems can become unintentional points of vulnerability, reinforcing the necessity of continuous security validation throughout the lifecycle of an application.

Researchers Reproduce Anthropic-Style AI Vulnerability Findings Using Public Models at Low Cost

 


New research suggests that the ability to discover software vulnerabilities using artificial intelligence is becoming both inexpensive and widely accessible, raising concerns that advanced cyber capabilities may be spreading faster than anticipated.

A study by Vidoc Security demonstrates that vulnerability discovery techniques similar to those highlighted in Anthropic’s recent “Mythos” work can be reproduced using publicly available AI models. By leveraging GPT-5.4 and Claude Opus 4.6 within an open-source framework called opencode, researchers were able to replicate key findings for under $30 per scan, without access to Anthropic’s internal systems or restricted programs.

Anthropic had earlier positioned its Mythos research as highly sensitive, limiting access to a small group of major organizations and prompting concern across policy and financial circles. Reports indicated that senior figures, including Scott Bessent and Jerome Powell, discussed the implications alongside leading financial executives. The term “vulnpocalypse” resurfaced in cybersecurity discussions, reflecting fears of large-scale AI-driven exploitation.

The Vidoc team sought to test whether such capabilities were truly restricted. Using patched vulnerability examples referenced in Anthropic’s public materials, they examined issues affecting a file-sharing protocol, a security-focused operating system’s networking components, widely used video-processing software, and cryptographic libraries used for identity verification online.

Across three independent runs, both models successfully reproduced two of the documented vulnerability cases each time. Claude Opus 4.6 also independently rediscovered a flaw in OpenBSD in all three attempts, while GPT-5.4 failed to identify that specific issue. In other instances, including vulnerabilities tied to FFmpeg and wolfSSL, the systems correctly identified relevant code regions but did not fully determine the root cause.

The methodology closely mirrored workflows described by Anthropic. Instead of relying on a single prompt, the system first analyzed entire codebases, divided them into smaller segments, and ran parallel detection processes. These processes filtered meaningful signals from noise and cross-checked findings across files. Importantly, the selection of code segments was automated through earlier planning steps, rather than manually guided.

Despite these results, the study underlines a clear distinction. Anthropic’s system reportedly went beyond identifying vulnerabilities by constructing detailed exploit pathways, such as chaining code fragments across multiple network packets to achieve full remote control of a system. The public models, while capable of locating weaknesses, did not reach that level of execution.

According to researcher Dawid Moczadło, this marks a turning point in the economics of cybersecurity. The most resource-intensive part of the process, identifying credible vulnerability signals, is becoming accessible to anyone with standard API access. However, validating those findings and converting them into reliable security insights or exploit strategies remains significantly more complex.

Anthropic itself has acknowledged that traditional benchmarks like Cybench are no longer sufficient to measure modern AI cyber capabilities, noting that its Mythos system exceeded those standards. The company estimated that comparable capabilities could become widespread within six to eighteen months.

The Vidoc findings suggest that, at least for vulnerability discovery, this transition may already be underway. By publishing their methodology, prompts, and results, the researchers highlight how open tools and commercially available models can replicate parts of workflows once considered highly restricted.

For organizations, the implications are significant. As AI reduces the cost and effort required to uncover software flaws, defenders may need to adopt continuous monitoring, faster remediation cycles, and deeper behavioral analysis. The challenge is no longer just identifying vulnerabilities, but managing the scale and speed at which they can now be discovered.

Fake Court Summons And Survey Scams Surge As Regions Bank Warns Of Rising Consumer Fraud Risks

 


Fear remains one of the most powerful tools scammers use, and today’s fraud tactics are evolving to exploit it more effectively than ever. Fake court summons and deceptive online survey scams are now being widely used to trick individuals into revealing sensitive information or making payments. Regions Bank has raised awareness around these threats, emphasizing that such schemes are designed to steal passwords, drain bank accounts, or silently install malware on personal devices. 

One of the more alarming trends involves fraudulent legal notices. Victims may receive messages claiming they missed a court date, failed to pay a toll, or owe a penalty. These alerts often create a sense of urgency, warning of arrest or severe consequences if immediate action is not taken. The goal is to push individuals into reacting quickly without verifying the information. Instead of legitimate resolution channels, these messages direct users to click suspicious links, scan QR codes, or call phone numbers that connect them directly to scammers.  

Although these communications can appear convincing, they often contain clear warning signs. Aggressive or threatening language, demands for immediate payment, and instructions to use unconventional methods such as gift cards or wire transfers are strong indicators of fraud. Genuine legal authorities follow formal processes and provide verifiable documentation, allowing individuals to confirm claims through official sources. Ignoring these red flags can lead to serious financial and data security consequences.

Another emerging tactic involves fake CAPTCHA prompts. These scams exploit the familiarity of “I’m not a robot” verification tools but introduce unusual instructions, such as pressing specific keyboard shortcuts. What seems like a routine step can actually trigger hidden malicious code, potentially installing malware on the user’s device. Legitimate CAPTCHA systems are simple and never require complex or unexpected actions, making any deviation a likely sign of a scam.

Survey scams represent another widespread threat. These schemes lure victims with promises of rewards such as cash, gift cards, or free products. After completing a series of questions, users are told they have “won” and are asked to provide payment details for a small fee. 

In reality, the reward never materializes, and the scammers gain access to valuable financial information. Organizations like the Better Business Bureau have noted a rise in such scams, highlighting unrealistic offers, vague company information, suspicious links, and poor grammar as common warning signs. If individuals encounter these scams, experts recommend deleting the message immediately, avoiding any engagement, and reporting the incident through official platforms such as the Internet Crime Complaint Center. Acting quickly is critical, especially if personal or financial information has already been shared. 

Ultimately, staying vigilant is the most effective defense. Avoid clicking on unknown links, verify information through trusted sources, enable multi-factor authentication, and regularly monitor financial accounts for unusual activity. These scams rely on urgency, fear, and enticing rewards to bypass rational thinking. While tactics continue to evolve, a cautious and informed approach remains the strongest way to protect against fraud in an increasingly digital environment.

Bank of America Bets Big on Risky Anthropic AI

 

Bank of America is aggressively expanding its use of Anthropic's advanced AI technology, even as U.S. regulators issue stark cybersecurity warnings. The bank's commitment highlights a broader trend where nearly 70% of financial institutions integrate AI into operations, prioritizing innovation over potential risks. This move comes amid global concerns about Anthropic's Claude Mythos Preview model, which has detected thousands of high-severity vulnerabilities in major operating systems and browsers. 

In early April 2026, Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell urgently met with CEOs from top U.S. banks, including Bank of America, to flag risks from Mythos. Officials warned that deploying the model could expose customer personal data to cyber threats, prompting Anthropic to limit access to a select group of tech and banking experts. World leaders echoed these fears: Bank of England Governor Andrew Bailey called AI a "very serious challenge," while ECB President Christine Lagarde supported restrictions on the technology. 

Anthropic itself has cautioned about the dangers, stating that rapid AI progress could spread powerful vulnerability-detection capabilities to unsafe actors, with severe fallout for economies and national security. Despite this, banks like JPMorgan, Goldman Sachs, Citigroup, and Bank of America are testing Mythos to bolster their own defenses. Canadian regulators and European counterparts have also raised alarms, underscoring the technology's global implications. 

Bank of America leads in AI adoption, with over 90% of its 200,000+ employees using the tools daily and a client-facing AI assistant logging three billion interactions in 2025 alone. Backed by a $13.5 billion tech budget—including $4 billion for AI initiatives—the bank focuses on end-to-end process transformation to boost revenue, client experience, and efficiency. Recent rollouts include an AI tool for financial advisors to identify prospects and summarize meetings. 

Bank of America's CTO Hari Gopalkrishnan emphasized balancing scale with governance at the Semafor World Economy 2026 summit, noting, "If you overdo it, you stall innovation. If you underdo it, you introduce a lot of risk." The strategy shifts from small proofs-of-concept to large-scale applications, aiming for measurable ROI while navigating regulatory scrutiny. As AI reshapes banking, Bank of America's bold push tests the fine line between opportunity and peril.

Hackers Use Hidden QEMU Linux VMs to Evade Windows Security and Launch Stealth Attacks

 

Cybersecurity experts have uncovered a stealthy tactic where attackers bypass Windows defenses by running concealed Linux virtual machines using QEMU. Researchers warn that these hidden environments allow threat actors to maintain persistent access, steal sensitive data, and even deploy ransomware.

Earlier findings highlighted how Russian-linked groups exploited Microsoft Hyper-V to install covert Linux virtual machines on targeted systems. However, because enterprise environments typically restrict or closely monitor Hyper-V, attackers have shifted to less scrutinized alternatives.

Security firm Sophos reports active misuse of QEMU, which enables attackers to operate a full Linux system within a Windows host. Activities carried out inside these virtual machines are largely undetectable by endpoint protection tools such as Windows Defender.

“Rather than deploying a pre-built toolkit, the attackers manually install and compile their full attack suite within the VM, including Impacket, KrbRelayx, Coercer, BloodHound.py, NetExec, Kerbrute, Metasploit, and supporting libraries for Python, Rust, Ruby, and C++,” Sophos said in a report detailing active exploitation campaigns.

Attackers frequently rely on Alpine Linux, particularly version 3.22.0, due to its minimal size and low resource consumption. This allows the malicious VM to operate with almost no visible impact on the host system.

Once their objectives are achieved, attackers can simply shut down the VM, erase its image, and disappear without leaving significant traces.

“Attackers are drawn to QEMU and more common hypervisor-based virtualization tools like Hyper-V, VirtualBox, and VMware,” Sophos researchers said.

“Malicious activity within a virtual machine (VM) is essentially invisible to endpoint security controls and leaves little forensic evidence on the host itself.”

One group leveraging this technique is linked to the PayoutsKing ransomware campaign and tracked as STAC4713. In observed cases, attackers used QEMU to establish covert reverse SSH backdoors, enabling them to deploy additional malicious payloads.

Even though a basic QEMU setup can run without administrative privileges, attackers often escalate access by launching VMs under a SYSTEM account via scheduled tasks. They disguise virtual disk files as innocuous items like “vault.db” and later shift to obscure DLL filenames such as “birsv.dll.”

Through these hidden VMs, attackers create reverse SSH tunnels to remote servers, granting full control over compromised systems. They also exploit built-in Windows applications like Paint, Notepad, and Edge to explore network shares and access files.

Another threat actor, identified as STAC3725, deployed a QEMU-based VM in February to conduct credential harvesting and system reconnaissance. This setup enabled activities such as Kerberos enumeration, Active Directory mapping, and even running FTP servers for staging malware or exfiltrating data.

“The abuse of QEMU represents a growing evasion trend where threat actors leverage legitimate virtualization software to conceal malicious actions from endpoint protection agents and audit logs,” Sophos warns.

“A hidden VM with a pre-loaded or compiled attack toolkit can enable a threat actor to have long-term access to a network, providing the ability to deploy malware, harvest credentials, and move laterally without leaving evidence on the host itself.”

To mitigate such risks, researchers advise IT teams to regularly audit systems for unexpected QEMU installations and suspicious scheduled tasks, especially those running under SYSTEM-level privileges. Indicators of compromise may include unusual SSH port forwarding (particularly port 22), outbound SSH connections from uncommon ports, and virtual disk files with atypical extensions such as .db, .dll, or .qcow2.
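The filename-disguise indicator can be checked mechanically during an audit: a QCOW2 image begins with the magic bytes `QFI\xfb` regardless of the extension it carries, so a disk image renamed to “vault.db” still betrays itself in its first four bytes. The sketch below is a minimal illustration of that check; the function names and the extension list are assumptions for this example, not part of Sophos's published tooling.

```python
from pathlib import Path

QCOW2_MAGIC = b"QFI\xfb"  # magic bytes at offset 0 of every QCOW2 image
DISGUISE_EXTENSIONS = {".db", ".dll"}  # extensions observed hiding VM disks

def is_disguised_qcow2(name: str, header: bytes) -> bool:
    """True if the content says QCOW2 but the filename pretends otherwise."""
    suffix = Path(name).suffix.lower()
    return header.startswith(QCOW2_MAGIC) and suffix in DISGUISE_EXTENSIONS

def audit_directory(root: str):
    """Walk a tree and report files whose magic bytes contradict their name."""
    suspicious = []
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix.lower() in DISGUISE_EXTENSIONS:
            with open(path, "rb") as fh:
                if is_disguised_qcow2(path.name, fh.read(4)):
                    suspicious.append(str(path))
    return suspicious
```

A check like this complements, rather than replaces, auditing scheduled tasks and SSH traffic, since attackers can delete the image once their objectives are met.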