
Ransomware Campaign Leverages QEMU to Slip Past Enterprise Defences


 

In an effort to circumvent traditional security controls, hackers are increasingly relying on virtualisation as a covert execution layer, embedding malicious operations within QEMU environments. In observed incidents, adversaries deployed concealed virtual machines in which tooling and command execution occurred largely beyond the reach of endpoint detection systems, leaving minimal forensic artifacts on the host operating system. 

In most cases, these environments are introduced as virtual disk images disguised under atypical file extensions such as .db or .dll and triggered by scheduled tasks with SYSTEM level privileges to create a parallel runtime that blends with legitimate processes.

According to analysts at Sophos, such techniques take advantage of the trust associated with widely used virtualization software. This pattern extends to platforms such as Microsoft Hyper-V, Oracle VM VirtualBox, and VMware, among others. These tactics reflect a broader strategic shift in which legitimate infrastructure is used to create isolated, low-noise environments that allow ransomware deployment while remaining effectively invisible to host-based defenses. Based on this pattern, Sophos researchers have highlighted that QEMU misuse is not a recent development, but its resurgence signals a renewed tactical emphasis on the technique. 

In late 2025, analysts identified two separate ransomware campaigns, STAC4713 and STAC3725, which use virtualised environments to avoid detection; STAC4713 is specifically associated with the financially motivated PayoutsKing ransomware cluster. 

An attacker established persistence for this campaign by creating a scheduled task, “TPMProfiler,” which executed a concealed virtual machine with SYSTEM-level privileges. The deployed disk images were deliberately disguised as benign assets, initially appearing as database files and later taking on the appearance of dynamic link libraries, in order to blend seamlessly into routine system artifacts. 

Once active, the virtual instance initiated reverse SSH tunneling mechanisms and port-forwarding mechanisms, forming covert communication channels that enabled sustained remote access while remaining outside the scope of conventional monitoring tools. 

Within this isolated Alpine Linux environment, adversaries employed a compact toolkit for tunneling, obfuscation, and data exfiltration, facilitating credential harvesting, the extraction of Active Directory databases, and the lateral exploration of network shares, all while utilizing legitimate system utilities. 

This intentional convergence of trusted binaries and hidden virtual infrastructure highlights a refined intrusion model in which malicious activity is woven into normal system behavior, increasing the difficulty of detection and response. 

Further investigation of STAC4713 has traced its origins to November 2025, when it was linked to the GOLD ENCOUNTER threat group and directly associated with PayoutsKing ransomware, an operation that departs from the conventional ransomware-as-a-service model by executing intrusions without the assistance of affiliates. 

Since emerging in mid-2025, the group has demonstrated a focus on hypervisor-centric environments, developing customized encryption tools for VMware platforms such as ESXi, signaling a deliberate shift towards infrastructure-level disruption. 

Additionally, a second campaign, STAC3725, appeared in February 2026. This campaign accessed the system via the CVE-2025-5777 exploit chain before deploying a malicious instance of ConnectWise ScreenConnect to secure persistence. Using this foothold, attackers orchestrated credential harvesting against Active Directory environments using a concealed QEMU virtual machine. 

The intrusion sequence in STAC4713 is well planned, beginning with the creation of the “TPMProfiler” scheduled task, which executes qemu-system-x86_64.exe with SYSTEM privileges. The task boots a virtual hard drive image disguised as a benign file, initially named "vault.db" and later renamed "bisrv.dll", to evade scrutiny.
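A defender-side heuristic for this pattern might flag scheduled-task command lines that launch a qemu-system binary against a disk image with a non-VM file extension. The sketch below is illustrative only; the extension lists and the example command line (modelled on the reported vault.db/bisrv.dll artifacts) are assumptions, not Sophos detection logic.

```python
import re

# Extensions normally used for VM disk images; anything else handed to a
# qemu-system binary is suspicious (e.g. .db or .dll, as in STAC4713).
VM_DISK_EXTS = {".qcow2", ".img", ".vmdk", ".vdi", ".raw", ".iso"}
MASQUERADE_EXTS = {".db", ".dll"}

def suspicious_qemu_task(command_line: str) -> bool:
    """Flag a scheduled-task command line that launches QEMU with a
    disk image disguised under a non-VM file extension."""
    if "qemu-system" not in command_line.lower():
        return False
    # Pull out file-like tokens (paths with an extension) and inspect them.
    for token in re.findall(r"[\w:\\/.\-]+\.\w+", command_line):
        ext = "." + token.rsplit(".", 1)[1].lower()
        if ext in VM_DISK_EXTS:
            return False          # looks like a legitimate VM disk
        if ext in MASQUERADE_EXTS:
            return True           # disk image masquerading as data/library
    return False

# Example modelled on the reported "TPMProfiler" task:
print(suspicious_qemu_task(
    r"C:\qemu\qemu-system-x86_64.exe -m 1024 -drive file=C:\data\bisrv.dll"))
```

The same check returns False for a QEMU invocation using an ordinary `.qcow2` disk, so it targets only the masquerading pattern rather than virtualization in general.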

In addition to this obfuscation, network manipulation techniques are employed, including port forwarding from non-standard ports such as 32567 and 22022 to SSH port 22, while reverse tunnels involving AdaptixC2 or OpenSSH are used to maintain persistent and covert connectivity to attacker-controlled networks. Embedded virtual machines operate on Alpine Linux 3.22.0 images preconfigured to offer a compact but robust toolkit that enables the rapid transfer of data and execution of commands. 
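The port-forwarding behaviour lends itself to a simple heuristic: flag any forward that redirects a non-standard high port onto SSH port 22, as reported with ports 32567 and 22022. A minimal sketch (the 1024 cutoff for "high port" is an assumption):

```python
def flag_ssh_forwards(forwards):
    """Given (listen_port, dest_port) pairs, flag forwards that
    redirect non-standard high ports to SSH (22), the pattern
    reported in these campaigns (e.g. 32567 -> 22, 22022 -> 22)."""
    return [(src, dst) for src, dst in forwards
            if dst == 22 and src != 22 and src >= 1024]

# Hypothetical observations from a network-flow log:
observed = [(443, 443), (32567, 22), (22022, 22), (8080, 80)]
print(flag_ssh_forwards(observed))  # -> [(32567, 22), (22022, 22)]
```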

The toolkit includes Linker2, AdaptixC2, the WireGuard obfuscation layer (wg-obfuscator), BusyBox, Chisel, and Rclone. In contrast, STAC3725 takes a more adaptive approach, compiling its toolset in situ within the virtual environment, including frameworks such as Impacket, KrbRelayX, Coercer, BloodHound.py, NetExec, Kerbrute, and Metasploit, along with Python, Rust, Ruby, and C dependencies. 

Post-compromise activities include credential extraction, Kerberos user enumeration via Kerbrute, Active Directory reconnaissance via BloodHound, and payload staging over FTP channels, demonstrating a methodical and deeply embedded attack model in which virtualization serves not only as a concealment mechanism, but also as a platform for sustained intrusion. 

In sum, the activity of STAC4713 and STAC3725 indicates a calculated evolution in adversary tradecraft, where virtualisation is no longer just a peripheral evasion tactic but a critical component of operations. Malicious workflows can be embedded within QEMU instances and aligned with trusted system processes, decoupling attackers' activities from the host environment. 

As a result, conventional endpoint controls are unable to detect the attacker's activities while persistent, low-noise access is maintained. The use of disguised storage artifacts, SYSTEM-level task execution, and encrypted communication channels demonstrates a disciplined approach to stealth, while the integration of credential harvesting, Active Directory reconnaissance, and lateral movement capabilities highlights the end-to-end nature of the intrusion. 

Sophos has observed that the resurgence of such campaigns indicates a broader industry challenge, in which legitimate infrastructure and administrative tools are increasingly repurposed to undermine defensive assumptions. 

Virtualised attack frameworks, with their convergence of concealment, persistence, and operational depth, provide a formidable vector for modern ransomware operations, requiring detection strategies that extend beyond the host to the virtual layers where adversaries now operate.

North Korea-Linked Hackers Target Crypto Platforms, $500M Stolen

 



Cybersecurity researchers are raising alarms over a developing pattern of cryptocurrency thefts linked to North Korean actors, with recent incidents suggesting a move from isolated breaches to a sustained and structured campaign. In a span of just over two weeks, attacks targeting the Drift trading platform and the Kelp protocol resulted in losses exceeding $500 million, pointing to a level of coordination that goes beyond opportunistic hacking.

What initially appeared to be separate security failures is now being viewed as part of a broader operational strategy, likely driven by the financial pressures faced by a heavily sanctioned state. Shortly after attackers used social engineering techniques to compromise Drift, another incident emerged involving Kelp, a restaking protocol integrated with cross-chain infrastructure.

The Kelp breach marks a noticeable turn in attacker behavior. Rather than exploiting traditional software bugs or stealing credentials, the attackers targeted fundamental design assumptions within decentralized systems. When examined together, the two incidents indicate a deliberate escalation in efforts to extract value from the crypto ecosystem.

Alexander Urbelis of ENS Labs described the pattern as systematic rather than incidental, noting that the frequency and timing of these events resemble an operational cycle. He warned that reactive fixes alone are insufficient against threats that follow a structured tempo.


Breakdown of the Kelp exploit

Unlike many traditional cyberattacks, the Kelp incident did not involve bypassing encryption or stealing private keys. Instead, the system behaved as designed, but was fed manipulated data. Attackers altered the inputs that the protocol relied on, causing it to validate transactions that never actually occurred.

Urbelis explained that while cryptographic signatures can verify the origin of a message, they do not ensure the truthfulness of the information being transmitted. In simple terms, the system confirmed who sent the data, but failed to verify whether the data itself was accurate.
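The distinction Urbelis draws can be shown in a few lines. In the sketch below, an HMAC stands in for the bridge's signature scheme (which this example does not reproduce): verification proves who produced a message, yet happily authenticates a message describing a transfer that never occurred.

```python
import hmac
import hashlib

shared_key = b"verifier-key"  # hypothetical key held by the message signer

def sign(message: bytes) -> bytes:
    """Produce an HMAC-SHA256 tag over the message."""
    return hmac.new(shared_key, message, hashlib.sha256).digest()

def verify(message: bytes, signature: bytes) -> bool:
    """Check the tag in constant time. Note what this proves: only that
    the holder of shared_key signed this exact byte string."""
    return hmac.compare_digest(sign(message), signature)

# A fraudulent cross-chain message: the deposit it describes never happened,
# but because the signer's key produced the tag, verification still succeeds.
msg = b'{"event": "deposit", "amount": 1000000, "confirmed": true}'
sig = sign(msg)
print(verify(msg, sig))  # True: origin is authentic, truthfulness is unchecked
```

Nothing in `verify` consults the real state of any ledger; that gap between authenticated origin and unverified content is exactly what the Kelp attackers exploited.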

David Schwed of SVRN reinforced this view, stating that the exploit was not based on breaking cryptography, but on taking advantage of how the system had been configured.

A central weakness was Kelp’s dependence on a single verifier to validate cross-chain messages. While this approach improves efficiency and simplifies deployment, it removes an essential layer of security redundancy. In response, LayerZero has advised projects to adopt multiple independent verifiers, similar to requiring multiple approvals in traditional financial systems.
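A k-of-n quorum along the lines LayerZero recommends can be sketched as follows; the boolean vote representation is a simplification for illustration:

```python
def quorum_verified(votes, threshold):
    """Accept a cross-chain message only if at least `threshold`
    independent verifiers attest to it (k-of-n), rather than
    trusting a single verifier."""
    return sum(1 for v in votes if v) >= threshold

# Single-verifier config: one compromised attestation is enough.
print(quorum_verified([True], 1))                # True
# 2-of-3 config: one poisoned verifier cannot force acceptance.
print(quorum_verified([True, False, False], 2))  # False
```

The design trade-off the article describes is visible here: the single-verifier path is cheaper and simpler, but the quorum path means an attacker must compromise multiple independent parties at once.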

However, this recommendation has sparked criticism. Some experts argue that if a configuration is known to be unsafe, it should not be offered as a default option. Relying on users to manually implement secure settings, especially in complex environments, increases the likelihood of misconfiguration.


Contagion across interconnected systems

The impact of the Kelp exploit did not remain confined to a single platform. Decentralized finance systems are deeply interconnected, with assets frequently reused across multiple protocols. This creates a chain of dependencies, where a failure in one component can propagate across others.

Schwed described these assets as interconnected obligations, emphasizing that the strength of the system depends on each individual link. In this case, lending platforms such as Aave, which accepted the affected assets as collateral, experienced financial strain. This transformed an isolated breach into a broader ecosystem-level disruption.


Reassessing decentralization claims

The incident also exposes a disconnect between how decentralization is promoted and how systems actually function. A structure that relies on a single point of verification cannot be considered fully decentralized, despite being marketed as such.

Urbelis expanded on this by noting that decentralization is not an inherent feature, but the result of specific design decisions. Weaknesses often emerge in less visible layers, such as data validation or infrastructure components, which are increasingly becoming primary targets for attackers.

The activity aligns with a bigger change in strategy by groups such as Lazarus Group. Instead of focusing only on exchanges or obvious coding flaws, attackers are now targeting foundational infrastructure, including cross-chain bridges and restaking mechanisms.

These components play a critical role in enabling asset movement and reuse across blockchain networks. Their complexity, combined with the large volumes of value they handle, makes them particularly attractive targets.

Earlier waves of crypto-related attacks often focused on centralized platforms or easily identifiable vulnerabilities. In contrast, current operations are increasingly directed at the underlying systems that connect the ecosystem, which are harder to monitor and more prone to configuration errors.

Importantly, the Kelp exploit did not introduce a new category of vulnerability. Instead, it demonstrated how existing weaknesses remain exploitable when not properly addressed. The incident underscores a recurring issue in the industry: security measures are often treated as optional guidelines rather than mandatory requirements.

As attackers continue to enhance their methods and increase the pace of operations, this gap becomes easier to exploit and more costly for organizations. The growing sophistication of these campaigns suggests that the primary risk may not lie in unknown flaws, but in the failure to consistently address well-understood security challenges.

Terms And Conditions Grow Harder To Read As Platforms Limit Users’ Legal Rights, Study Finds

 

Most people click "agree" without looking - yet those agreements keep getting harder to understand. Complexity rises, researchers note, just as user protections shrink. From Cambridge, a recent study points out expanded corporate access to personal information. Legal barriers grow tougher, making it more difficult to take firms to court. Lengthy clauses quietly reshape power, favoring businesses over individuals. Beginning with a project called the Transparency Hub, results emerge from systematic tracking of legal texts across 300-plus online platforms. 

Stored within it: twenty thousand iterations - past and present - of service conditions and privacy notices from apps like TikTok, among others. Over months, changes in wording reveal shifts in corporate approaches to personal information. What users agree to today may differ subtly from last year’s version, now preserved here. Visibility grows when updates accumulate, showing patterns once hidden beneath routine acceptance clicks. Surprisingly clear trends show a steady drop in how easily people can read service contracts. 

From 2016 to 2025, analysis applying the Flesch-Kincaid method reveals that nearly 86 percent of the agreements demand reading skills typical of university-level readers. Because of this shift, grasping the full meaning behind digital consent has grown harder for most individuals. While signing up seems routine, the depth of understanding often lags behind. Away from mere complexity, attention turns to changing corporate approaches in handling disagreements. While once settled in open courtrooms, conflict resolution now leans on closed-door arbitration imposed by platform rules. 
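The Flesch-Kincaid grade level itself is a simple formula: 0.39*(words per sentence) + 11.8*(syllables per word) - 15.59. A rough implementation follows; the syllable counter is a crude vowel-group heuristic for illustration, not the tooling the Cambridge study used.

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: each maximal run of vowels counts as one syllable.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid grade level:
    0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words) - 15.59)

simple = "We use your data. You can say no."
legalese = ("Notwithstanding any contrary provision, the licensee "
            "irrevocably relinquishes entitlement to participate in "
            "consolidated or representative adjudication proceedings.")
print(flesch_kincaid_grade(simple) < flesch_kincaid_grade(legalese))  # True
```

Run on the two invented snippets above, the plain sentence scores near early-primary level while the arbitration-style clause lands well into university territory, the gap the study quantifies at scale.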

A third-party referee reaches final judgments, yet clarity tends to fade behind closed processes. Users find their options shrinking when collective lawsuits are blocked. Even mediator choices sometimes rest with the businesses involved, quietly shaping outcomes. Newer artificial intelligence platforms like Anthropic and Perplexity AI also follow this pattern, embedding clauses that block participation in group litigation. Because of this, anyone feeling wronged has to file a personal claim - often pricier and weaker than joining others in court. A few companies allow narrow chances to decline the clause; however, acting fast after registration is usually required. 

Now appearing, this study arrives as officials across Europe weigh tighter rules for online services, focusing on effects tied to youth engagement. With France leading examples, followed by Spain, Portugal, and Denmark, governments test new steps aimed at tackling unease around digital privacy and web-based risks. One thing stands out: laws around online services are drifting further from what everyday users can grasp. 

Though written rules get longer and tighter, people must now sort through fine print that defines their digital freedoms - frequently unaware of what they’re agreeing to. While clarity lags behind complexity, personal responsibility quietly expands.

Lazarus Hackers Steal $290M from KelpDAO in Cross-Chain Exploit

 

KelpDAO has become the latest DeFi project to face a major security crisis after a $290 million heist that investigators say is likely tied to North Korea’s Lazarus Group. The attack targeted rsETH, a restaked ether asset used across several protocols, and drained about 116,500 tokens in a matter of hours. What makes the incident alarming is that the exploit did not appear to rely on a typical smart-contract flaw. Instead, it seems to have abused the project’s cross-chain verification setup, showing how a vulnerability in infrastructure can be just as damaging as a bug in code. 

According to the project’s public statement, KelpDAO detected suspicious cross-chain activity involving rsETH on April 18, 2026, and quickly paused rsETH contracts across Ethereum mainnet and Layer 2 networks. The team said it was working with LayerZero, Unichain, and other partners to investigate the breach and contain the damage. On-chain activity later showed that the stolen funds were moved through Tornado Cash, a common laundering route used to hide crypto theft. 

LayerZero’s early findings suggest the attack was highly coordinated. Researchers believe the hackers compromised RPC nodes and then used a DDoS campaign to force the system into failing over to poisoned infrastructure, where fraudulent cross-chain messages could be accepted as legitimate. In other words, the attackers appear to have tricked the bridge layer into believing a transfer had been properly authorized. That design weakness, rather than the asset itself, seems to have opened the door to the theft. 
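The failover dynamic described here can be illustrated schematically. In the sketch below, the endpoint URLs are hypothetical and the health check is a stub; the point is that naive first-healthy-wins failover degrades into trusting whatever infrastructure survives a DDoS.

```python
def select_rpc(endpoints, is_healthy):
    """Naive failover: return the first endpoint that responds.
    If an attacker can take the trusted endpoints offline, the
    client silently falls back to whatever remains, which may be
    attacker-controlled."""
    for ep in endpoints:
        if is_healthy(ep):
            return ep
    raise RuntimeError("no RPC endpoint reachable")

endpoints = ["https://rpc-primary.example", "https://rpc-backup.example",
             "https://rpc-poisoned.example"]   # last one attacker-run
# Simulate the reported DDoS: both trusted endpoints are unreachable.
down = {"https://rpc-primary.example", "https://rpc-backup.example"}
print(select_rpc(endpoints, lambda ep: ep not in down))
```

With the trusted endpoints down, the selector hands back the poisoned one, after which any fraudulent cross-chain message it serves is accepted as legitimate.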

The impact propagated quickly beyond KelpDAO. Because rsETH is accepted as collateral in lending markets, the exploit created risk for other DeFi platforms, including Compound, Euler, and Aave. Aave responded by freezing and blocking new deposits or borrowing using rsETH collateral. The wider market reaction highlights how one compromised bridge can ripple across multiple protocols, creating uncertainty far beyond the original target. 

The KelpDAO incident is another reminder that DeFi security depends not only on smart-contract audits, but also on the trust assumptions behind cross-chain systems. As protocols grow more interconnected, attackers need only find one weak link to trigger losses on a massive scale. For users and developers alike, the lesson is clear: layered security, diversified verification, and conservative bridge design are no longer optional.

PyTorch Lightning and Intercom Client Users Exposed to Credential Stealing Campaign


 

Python's software supply chain has been hit by a sophisticated compromise that targeted the popular PyPI package lightning and exposed downstream machine learning environments to covert credential theft. 

According to researchers at Aikido Security, OX Security, Socket, and StepSecurity, versions 2.6.2 and 2.6.3, both published on April 30, 2026, were maliciously modified as part of a broader intrusion related to the "Mini Shai-Hulud" campaign. 

A day earlier, the attack had emerged through compromised SAP-related npm packages, underlining an ongoing trend of coordinated cross-ecosystem supply chain threats targeting high-value development environments. As a result of this compromise, organizations that utilize PyTorch Lightning, an open-source abstraction layer over PyTorch with over 31,000 stars on GitHub, face significant risk. 

Lightning is frequently embedded in dependency trees facilitating image classification, fine-tuning of large language models, diffusion workloads, and forecasting, and this ubiquity further increased the scope of the attack. 

A standard pip install lightning command was sufficient to activate the malicious chain; exploitation required no sophisticated trigger. Upon installation of the compromised package, a hidden _runtime directory containing obfuscated JavaScript was created and executed automatically upon module import. This behavior was embedded within the package's initialization logic, ensuring that no additional user interaction was required to execute the script. 
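Defenders can hunt for this specific layout, a `_runtime` directory holding JavaScript inside an installed Python package, with a short filesystem scan. This is a sketch of the reported indicator, not an official detection rule.

```python
import os
import pathlib
import tempfile

def find_hidden_js_runtimes(root: str):
    """Walk an installed-packages tree and flag `_runtime` directories
    containing JavaScript files, the layout reported in the compromised
    lightning releases (JS payloads have no business in a Python wheel)."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        if os.path.basename(dirpath) == "_runtime":
            if any(f.endswith(".js") for f in filenames):
                hits.append(dirpath)
    return hits

# Demonstration against a throwaway tree mimicking the reported layout.
with tempfile.TemporaryDirectory() as tmp:
    payload_dir = pathlib.Path(tmp, "lightning", "_runtime")
    payload_dir.mkdir(parents=True)
    (payload_dir / "router_runtime.js").write_text("// obfuscated payload")
    print(len(find_hidden_js_runtimes(tmp)))  # 1
```

In practice one would point the scan at each site-packages directory; a hit warrants immediate quarantine of the environment and credential rotation.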

As the next stage, a Python script (start.py) downloaded the Bun JavaScript runtime from external sources, followed by an 11 MB obfuscated file (router_runtime.js) that carried out the attack sequence in stages. Executing cross-language JavaScript from within a Python package marks a significant evolution in attacker tradecraft, complicating detection mechanisms that focus on single-language threats.

The malware's primary objective was credential harvesting. Analysis indicates that the malware targeted GitHub tokens, cloud service credentials spanning Amazon Web Services (AWS), Google Cloud Platform (GCP), and Azure, SSH keys, NPM tokens, Kubernetes configurations, Docker credentials, and environment variables systematically. Moreover, it was also capable of accessing cryptocurrency wallets and developer secrets stored within local and continuous integration/continuous delivery environments. 

By exploiting compromised credentials, the attackers exfiltrated stolen data, often by automating commits to attacker-controlled GitHub repositories, effectively concealing the malicious activity within legitimate developer workflows. Distinctive markers linked the campaign to the "Shai-Hulud" identity. 

Infected environments were observed creating public repositories with unusual naming conventions, including EveryBoiWeBuildIsaWormBoi and descriptions such as "A Mini Shai-Hulud has appeared." Attackers seem to be able to track compromised systems using these artifacts both as infection indicators and as signalling mechanisms. 

Efforts have been made to link the activity to a financially motivated threat group referred to as TeamPCP, which has consistently demonstrated a focus on credential-rich development environments. According to OX Security, approximately 8.3 million downloads are likely to have been exposed as a result of the incident. 

The same day, Intercom-Client was also compromised, further demonstrating the coordinated nature of the campaign. These incidents cap a series of supply chain breaches affecting npm, PyPI, and Docker Hub between April 21 and 23, suggesting a deliberate and sustained effort to infiltrate widely trusted software distribution channels.

Further examination of the router_runtime.js payload uncovered extensive obfuscation and a clear focus on credential access and repository manipulation. Approximately 700 references to process and environment variables, over 460 to authentication tokens, and roughly 330 to code repositories were identified. 

These patterns closely mirror Shai-Hulud operations, emphasizing code reuse and iterative refinement of attack techniques. The payload was also capable of poisoning GitHub repositories and propagating through npm packages, raising concerns about secondary infection vectors beyond data exfiltration. 

The Lightning-AI maintainers became aware of the compromise when a user reported suspicious behavior under issue #21689, titled “Possible supply chain attack on version 2.6.3.” The report described a hidden execution chain that involved downloading the Bun runtime and executing a large obfuscated payload during module import. Despite this, the issue was later closed without clarification, creating uncertainty about the project's initial response to the matter. 

Following Socket's disclosure in the Lightning-AI/pytorch-lightning repository, an even more unusual outcome occurred. In a matter of seconds, an account identified as pl-ghost closed the issue warning about compromised versions and then posted a meme entitled "SILENCE DEVELOPER." The behavior was seen as anomalous and raised immediate concerns about potential account compromise. 

Additional suspicious activity was tied to the same account, including six rapid branch creations and deletions across multiple repositories within approximately 70 minutes. Several of these branches followed random 10-character lowercase naming conventions, consistent with the behavior of the Shai-Hulud worm as it probes for write access. 

One branch impersonated Dependabot but contained inconsistencies such as a misspelled identifier and an incorrect naming structure; all branches were deleted within seconds of creation, and none triggered workflows, indicating automated probing rather than development activity. This combined evidence strongly suggests that a maintainer account was compromised, possibly using the same stolen credentials that enabled the malicious package publication on PyPI. 
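The probing pattern described here (random ten-character lowercase branch names, deleted within seconds, no workflows triggered) reduces to a compact heuristic. The timestamps and the five-second lifetime threshold below are assumptions for illustration.

```python
import re

def probing_branches(events, max_lifetime_s=5):
    """Flag branches whose names look like random 10-character lowercase
    strings and which were deleted within seconds of creation, matching
    the Shai-Hulud write-access probing pattern. `events` is a list of
    (branch_name, created_ts, deleted_ts) tuples."""
    flagged = []
    for name, created, deleted in events:
        if re.fullmatch(r"[a-z]{10}", name) and (deleted - created) <= max_lifetime_s:
            flagged.append(name)
    return flagged

# Hypothetical audit-log entries:
events = [
    ("qwkzjfhtpo", 100.0, 102.5),                              # probe-like
    ("feature/login-fix", 0.0, 86400.0),                       # normal work
    ("dependabot/npm_and_yarn/lodash-4.17.21", 50.0, 51.0),    # real bot style
]
print(probing_branches(events))  # ['qwkzjfhtpo']
```

A real deployment would feed this from repository audit logs and pair it with a check for branches that never trigger CI, the second tell noted above.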

Upon learning of the incident, Python Package Index administrators quarantined Lightning versions that may have been affected. According to the maintainers, an investigation is underway in order to determine the cause, as the compromised releases introduced functionality that was consistent with credential harvesting methods. 

In the meantime, developers are strongly advised to remove versions 2.6.2 and 2.6.3 from their environments, downgrade to version 2.6.1, and rotate any potentially exposed credentials across cloud and development platforms, including API keys, tokens, and access credentials. The campaign is also evolving beyond Python.
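On the Python side, the version guidance can be reduced to a quick check. This is a sketch of the advice above, not an official advisory tool; consult the project's security notices for authoritative remediation steps.

```python
COMPROMISED = {"2.6.2", "2.6.3"}   # versions reported as malicious
SAFE_PIN = "2.6.1"                 # last release before the compromise

def remediation_advice(installed_version: str) -> str:
    """Return the recommended action for an installed `lightning`
    version, following the guidance reported for this incident."""
    if installed_version in COMPROMISED:
        return (f"remove {installed_version}, pin lightning=={SAFE_PIN}, "
                "and rotate all credentials the environment could reach")
    return "not in the known-compromised set; still rotate secrets if exposed"

print(remediation_advice("2.6.3"))
print(remediation_advice("2.6.1"))
```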

Researchers have confirmed that version 7.0.4 of the intercom-client package within the Node ecosystem has also been compromised, using a preinstall hook to execute credentials-stealing malware. Packagist also has been affected by the attack, where the intercom/intercom-php package (version 5.0.2) has been altered to include a Composer plugin that downloads the Bun runtime using a shell script (setup-intercom.sh) and executes the same obfuscated payload during installation and updates. 

The stolen data was encrypted and exfiltrated to a remote server endpoint, further demonstrating the campaign's adaptability across ecosystems. It has been determined that the GitHub account "nhur" was likely compromised, and that the malicious intercom-client package was published through an automated Continuous Integration workflow triggered by a now-deleted GitHub branch.

It appears that technical overlap exists among the npm, PyPI, and PHP ecosystems, with similarities in exfiltration techniques based on GitHub, credential targeting patterns, and payload structures. Furthermore, researchers have found similarities between these attacks and previous ones affecting organizations such as Checkmarx, Bitwarden, Telnyx, LiteLLM, and Aqua Security's Trivy, which supports the hypothesis that a single threat actor is responsible. 

Upon suspension from mainstream platforms, TeamPCP reportedly launched an onion-based platform on the dark web to expand its presence. Additionally, the actors have publicly referenced their ties with other cybercriminal groups, including LAPSUS$, while marketing their own tooling infrastructure. 

The developments suggest that the threat landscape is becoming increasingly organized and persistent, with supply chain attacks serving not as isolated incidents but as a broader strategy for infiltrating and monetizing developer ecosystems. As investigations continue, the Lightning and Intercom compromises remain a stark reminder of the fragility of modern software supply chains. 

With attackers increasingly capable of pivoting across ecosystems and exploiting trusted distribution channels, organizations operating in cloud-native and AI-driven environments have become increasingly reliant on robust dependency auditing, real-time monitoring, and rapid incident response. 

The incident highlights a critical juncture in software supply chain security, at which trusted ecosystems are increasingly being weaponised through stealthy, cross-language attack chains that are emerging from across the globe. The coordinated compromises of PyPI, npm, and Packagist packages, together with evidence of maintainer account abuse and automated propagation techniques, demonstrate a high level of operational maturity that challenges traditional methods of detection and response. 

Proactive measures, including rigorous dependency auditing, tighter access controls, and continuous monitoring of build environments, are now necessary to guard against threats such as TeamPCP, which has demonstrated the capability to infiltrate developer workflows at scale. 

It is imperative to safeguard the integrity of open-source components in order to maintain confidence in modern software development in the present threat landscape.

Are You Letting AI Do Too Much of Your Thinking?

 




As artificial intelligence tools take on a growing share of everyday thinking tasks, researchers are raising concerns that this shift may be quietly affecting how people process information, remember ideas, and engage with their own work.

When Nataliya Kosmyna reviewed applications for internships, she noticed a pattern that stood out. Many cover letters were structured in nearly identical ways, written in polished language, and included vague or forced connections to her research. The consistency suggested that applicants were relying on large language models, the technology behind tools such as ChatGPT, Google Gemini, and Claude.

At the same time, while teaching at the Massachusetts Institute of Technology, Kosmyna began noticing that students were finding it harder to retain what they had learned. Compared to previous years, more students struggled to recall material, which led her to question whether growing dependence on AI tools could be influencing cognitive abilities.

Researchers studying human-computer interaction are increasingly concerned that relying too heavily on AI may alter not just how people write but how they think. This phenomenon, often described as “cognitive offloading,” refers to shifting mental effort onto external tools. While this has existed for years with calculators and search engines, experts warn that AI systems may deepen the effect because they generate complete responses rather than simply helping users find information.

Earlier research on internet usage identified what is known as the “Google effect,” where people became less likely to remember facts because they could easily look them up. Some researchers argued that this allowed the brain to focus on more complex tasks. However, AI tools now go a step further by producing answers, arguments, and even creative content, reducing the need for active thinking.

To better understand the impact, Kosmyna and her team conducted an experiment involving 54 students. Participants were divided into three groups. One group used AI tools to write essays, another relied on search engines without AI-generated summaries, and a third completed the task without any digital assistance. Their brain activity was monitored while they worked on open-ended topics such as happiness, loyalty, and everyday decisions.

The differences were clear. Students who worked without any tools showed strong and widespread brain activity across multiple regions. Those using search engines still demonstrated notable engagement, particularly in areas related to visual processing. In contrast, the group using AI tools showed comparatively lower brain activity, with levels dropping by as much as 55%. Activity in areas linked to creativity and deeper thinking was especially reduced.

The impact extended beyond brain activity. Students who used AI struggled to recall what they had written shortly after completing their essays. Several participants also reported feeling disconnected from their work, as if they had not fully contributed to it. Similar findings from other studies suggest that frequent use of AI tools can weaken memory retention and recall.

Research from the University of Pennsylvania introduces another concern described as “cognitive surrender,” where users accept AI-generated responses without questioning them. In such cases, individuals may rely on the system’s output even when it conflicts with their own understanding.

The effects are not limited to academic settings. A multinational study found that medical professionals who relied on AI tools for detecting colon cancer became less accurate when asked to identify cases without assistance after several months of use. This suggests that repeated dependence on AI may reduce independent decision-making skills, even in critical fields.

Kosmyna also observed that essays written with AI tended to be highly similar, lacking variation in style and depth. Teachers reviewing the work described it as uniform and lacking originality. In some cases, the responses were so alike that it appeared as though students had collaborated, even when they had not.

Follow-up observations months later revealed further differences. Students who had previously relied on AI showed weaker neural connectivity when asked to complete tasks without it, compared to those who had worked independently earlier. This may indicate that they had engaged less deeply with the material from the start.

Vivienne Ming, author of Robot Proof, has raised similar concerns. In her research, students asked to make real-world predictions often defaulted to copying answers from AI systems instead of forming their own conclusions. Brain measurements showed low levels of gamma wave activity, which is associated with active thinking. Reduced gamma activity has been linked in other studies to cognitive decline over time.

However, not all users showed the same pattern. A small group, fewer than 10%, used AI differently by treating it as a source of information rather than a final answer. These individuals analysed the output themselves, showed stronger brain engagement, and produced more accurate results.

The concerns echo earlier findings related to navigation technology. Increased reliance on GPS has been associated with reduced spatial memory in some studies. Weak spatial navigation skills have also been explored as a possible early indicator of conditions such as Alzheimer's disease. These parallels suggest that reduced mental effort over time may have broader cognitive consequences.

Researchers emphasize that AI itself is not the problem but how it is used. Ming advocates for a more deliberate approach, where individuals think through problems first and then use AI to test or refine their ideas. She suggests methods such as asking AI to challenge one’s reasoning or limiting it to providing context instead of direct answers, encouraging deeper engagement.

Kosmyna similarly recommends building a strong understanding of subjects without AI assistance before integrating such tools into the learning process.

The alarming takeaway from the current research is clear. While AI offers efficiency and convenience, it may also encourage mental shortcuts. Human cognition depends on regular effort and engagement, and reducing that effort could carry long-term consequences. As these tools become more integrated into daily life, the challenge will be to use them in ways that support thinking rather than replace it.



eth.limo DNS Hijack Thwarted By DNSSEC After Social Engineering Attack On EasyDNS

 

The ENS gateway eth.limo has disclosed a DNS hijack stemming from a social engineering attack on EasyDNS, its domain registrar. Although the attacker briefly altered DNS settings through unauthorized access, layered safeguards held firm and user activity was unaffected. The compromise occurred at the registrar level, yet defenses prevented escalation beyond a temporary domain redirection. The incident began when a person impersonating a member of the eth.limo team tricked EasyDNS support into initiating an account reset. 

With that mistaken trust, the intruder gained access to the account and repointed the domain, first toward Cloudflare nameservers and then toward Namecheap systems. Automated alerts fired as soon as those changes took effect, giving the real eth.limo team time to react quickly and reverse the breach. The gateway itself is a single point of failure: it acts as a bridge, routing requests from ordinary browsers to content hosted on networks such as IPFS, Arweave, and Swarm. Because its DNS setup uses wildcards, countless .eth addresses rely on the same infrastructure, so all of them are exposed when that one component fails. 

Traffic meant for legitimate decentralized sites could instead have flowed toward servers under attacker control, and notable resources, including those tied to figures like Vitalik Buterin, faced potential exposure had the deception taken hold. What stopped the damage was DNS Security Extensions (DNSSEC), which validates DNS replies with digital signatures. Without access to the zone's private signing keys, the hijacker could not produce valid signatures for the forged entries, so validating resolvers rejected the corrupted data and returned errors rather than loading malicious pages. 
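The validation logic DNSSEC applies can be sketched conceptually. Real DNSSEC uses public-key cryptography (RRSIG records verified against DNSKEY records in a chain of trust); the sketch below substitutes an HMAC purely as a stand-in signature scheme, since the point being illustrated is the same: a hijacker who lacks the signing key cannot forge a record that a validating resolver will accept. The key material and record values here are invented for the example.

```python
import hmac
import hashlib

# Hypothetical zone signing key; the attacker never has this.
ZONE_KEY = b"zone-signing-key-material"

def sign_record(name: str, rdata: str, key: bytes) -> str:
    """Sign a simplified A record (stand-in for a real RRSIG)."""
    msg = f"{name}|A|{rdata}".encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def validate(name: str, rdata: str, signature: str, key: bytes) -> str:
    """Return the record data if the signature verifies, else SERVFAIL."""
    expected = sign_record(name, rdata, key)
    # Constant-time comparison, as a real validator would use.
    return rdata if hmac.compare_digest(expected, signature) else "SERVFAIL"

# Legitimate record, signed with the zone key: accepted.
good_sig = sign_record("eth.limo", "203.0.113.10", ZONE_KEY)
assert validate("eth.limo", "203.0.113.10", good_sig, ZONE_KEY) == "203.0.113.10"

# Hijacker repoints the record but can only sign with their own key: rejected.
forged_sig = sign_record("eth.limo", "198.51.100.66", b"attacker-key")
assert validate("eth.limo", "198.51.100.66", forged_sig, ZONE_KEY) == "SERVFAIL"
```

This is why the hijack produced resolution failures rather than silently serving the attacker's content: resolvers that validate DNSSEC refuse unsigned or badly signed answers outright.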

Though eth.limo and EasyDNS saw interference, both noted that this layer kept the impact minimal, and to date no users are known to have suffered losses from the attempt. EasyDNS spoke out after the event, calling it the first successful customer-targeted social engineering attack in its almost thirty years of operation. Improvements to internal procedures are underway, and eth.limo plans to move to a stricter registrar configuration, one without account recovery pathways, to block repeat incidents. 

Over time, such weak entry points may be closed off, but the pattern is increasingly common across decentralized services. While blockchains themselves remain distributed and protected, the websites people actually visit run on conventional domain infrastructure, and those entry points are the doors attackers are now using most frequently. Rather than breaking encryption, they redirect traffic by manipulating DNS records; users are sent elsewhere without noticing, sometimes losing assets quickly. The eth.limo incident shows clearly why layered security matters more than ever. 

Even when social engineering succeeds, safeguards such as DNSSEC can stop further damage. As digital threats keep changing shape, companies, especially in cryptocurrency, are paying closer attention to protecting not just blockchain networks but also the traditional services people rely on to reach them.

Stryker Attack Wipes Thousands of Devices Without Malware

 

Stryker’s latest cyber incident is a stark reminder that attackers do not always need malware to cause major damage. The medical technology company said the breach was confined to its internal Microsoft environment and did not affect its products, including connected and life-saving devices, which remain safe to use. Even so, the attack disrupted business operations and forced customers to place orders manually while electronic ordering systems stayed offline. 

According to the report, the incident was not a ransomware attack, and Stryker emphasized that no malware was deployed on its systems. Instead, the threat actor appears to have used legitimate Microsoft Intune tools to remotely wipe devices after compromising an administrator account and creating a new Global Administrator account. That method made the attack especially dangerous because it relied on trusted enterprise controls rather than suspicious malicious software. 

The scale of the wipe was severe. A source familiar with the attack told BleepingComputer that nearly 80,000 devices were erased between 5:00 and 8:00 a.m. UTC on March 11. Employees across multiple countries reportedly woke up to find company-managed laptops and mobile devices wiped overnight. The group Handala, believed to be linked to Iran, claimed responsibility and said it had destroyed over 200,000 systems and stolen 50 terabytes of data, though investigators did not confirm those claims. 

What makes this case notable is that the attack appears to have used “living off the land” tactics, meaning the intruder abused legitimate administrative access rather than deploying custom code. That approach can be harder to detect because security tools often look for malware signatures or known exploit behavior, not authorized commands executed by a compromised admin account. The result is a fast, high-impact disruption that can spread across a corporate fleet in hours. 

For enterprises, the Stryker case reinforces the need for stronger identity protection, tighter administrator controls, and better monitoring of cloud management platforms. Privileged access should be minimized, account creation should be closely audited, and wipe capabilities should require strong checks before execution. In this incident, the attacker did not need an exploit or a virus; a stolen credential and a legitimate tool were enough to cripple a large organization.
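The auditing recommendations above can be made concrete with a small detection sketch. The event format below is invented for illustration; it is not the actual Entra ID or Intune audit schema, and a real deployment would consume those logs through Microsoft's own export mechanisms. The logic it demonstrates is the one the Stryker case calls for: alert on privileged role grants and on remote wipe commands, even though both are "legitimate" operations.

```python
# Hypothetical audit-log watcher: flags the two event types that enabled
# this attack — privileged role assignment and remote device wipe.
PRIVILEGED_ROLES = {"Global Administrator", "Intune Administrator"}

def flag_risky_events(events):
    """Return human-readable alerts for high-risk administrative actions."""
    alerts = []
    for e in events:
        if e["action"] == "Add member to role" and e["role"] in PRIVILEGED_ROLES:
            alerts.append(f"Privileged role granted: {e['target']} -> {e['role']}")
        if e["action"] == "Wipe device":
            alerts.append(f"Remote wipe issued by {e['actor']} on {e['target']}")
    return alerts

# Synthetic events mirroring the reported attack sequence.
log = [
    {"action": "Add member to role", "role": "Global Administrator",
     "actor": "attacker@corp.example", "target": "newadmin@corp.example"},
    {"action": "Wipe device", "role": None,
     "actor": "newadmin@corp.example", "target": "LAPTOP-0042"},
]
for alert in flag_risky_events(log):
    print(alert)
```

The design point is that both events are individually authorized operations; only correlation and alerting on them as a class gives defenders a chance to interrupt a living-off-the-land wipe in progress.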

Retailer Secures Website After Customer Data Leak Risk Identified


 

Express has quietly fixed a security flaw that permitted unauthorized access to customer order data, a significant lapse in web application security. The vulnerability exposed sensitive information, including customer names, email addresses, telephone numbers, shipping details, and partial payment data, after order confirmation pages were inadvertently indexed by search engines.

At least a dozen such records appeared in search results, demonstrating that the sequential order identifiers embedded in URLs could be exploited without sophisticated intrusion techniques. The issue was uncovered by an independent security researcher during a fraud investigation, which highlights how seemingly routine inquiries can reveal deeper systemic weaknesses in data handling and access controls. The company then took immediate corrective measures.

The exposed records contained a wide variety of personally identifiable information, including customer names, phone numbers, email addresses, billing and delivery locations, and masked payment card details, all accessible via public order confirmation pages. Because of inadequate access controls and predictable URL patterns, users could enumerate order records simply by altering parameters in the web address.
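Why sequential identifiers are so dangerous can be shown in a few lines. The URL path below is hypothetical (it is not Express's actual URL scheme), and the token approach shown is one common mitigation, not a description of the fix Express deployed. Note that unguessable tokens supplement, rather than replace, a server-side check that the viewer actually owns the order.

```python
import secrets

# With sequential IDs, one valid confirmation URL lets a visitor enumerate
# neighbouring orders with a trivial loop — no "hacking" required.
known_order = 100042
guesses = [f"/order/confirm?id={known_order + i}" for i in range(1, 4)]
print(guesses)

# A capability-style token carries ~128 bits of randomness per order, so
# adjacent guesses reveal nothing about other customers' records.
token = secrets.token_urlsafe(16)
print(f"/order/confirm?token={token}")
```

Automated enumeration turns this from a dozen indexed pages into the entire order database, which is why predictable identifiers plus missing access checks rank among the most common web application flaws.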

While investigating a suspicious transaction involving a family member, Rey Bango discovered that a simple search query could reveal unrelated customer orders that had previously been indexed by search engines. 

Following disclosure, Express, which is now owned by WHP Global, took steps to remediate the issue, but the company has not yet clarified whether affected individuals will receive formal notification. Joe Berean reaffirmed the organization's commitment to safeguarding consumer data and encouraged responsible reporting of vulnerabilities, but did not outline a structured vulnerability reporting process. 

A number of data exposure incidents have been linked to misconfigured web assets in the past year, reinforcing the persistent gaps in secure development practices as well as the challenges that enterprises must overcome when preventing unintended data leaks at large scales. 

The discovery was largely accidental, arising from Rey Bango's attempt to validate a potentially fraudulent transaction on a family member's account. In the absence of a clearly defined reporting channel, he escalated the issue by submitting a report directly to ensure prompt resolution. His findings showed that, because confirmation pages were indexed and order identifiers were sequential, search engines could surface unrelated customers' records in response to queries for order numbers. 

Independent verification confirmed that minor manipulation of URL parameters enabled unauthorized access to other users' order histories and personal information, a vulnerability that could be amplified through automated enumeration. Express addressed the flaw after disclosure, but it remains unclear whether affected customers will be notified and whether forensic logs can establish the extent of any unauthorized access. 

The company's marketing head, Joe Berean, reinforced Express's commitment to data security but offered limited transparency on incident response, with no information about a formal vulnerability disclosure framework or regulatory notification obligations. 

The lack of clarity around follow-up compliance, particularly with U.S. breach disclosure requirements, underscores these persistent governance gaps. The episode also fits a broader pattern of misconfiguration-related exposure incidents, as seen in recent disclosures involving Home Depot and Petco. When security controls are overlooked, sensitive customer data remains accessible, highlighting the ongoing challenge of enforcing robust web application security. 

The incident illustrates how relatively simple design oversights, such as predictable identifiers and improperly restricted web resources, can quickly morph into large-scale privacy risks when combined with search engine indexing and absent disclosure mechanisms. 

The company has taken steps to resolve the immediate vulnerability, but the lack of clarity around notification to customers, audit logging, and formal vulnerability intake procedures raises concerns regarding incident readiness and accountability. 

As digital commerce footprints expand, the case illustrates the need to incorporate secure-by-design principles, implement robust access controls, and maintain transparent reporting mechanisms so that flaws are addressed before they become more serious. 

When these safeguards are not in place, even routine transactional systems can become unintentional points of vulnerability, reinforcing the necessity of continuous security validation throughout the lifecycle of an application.

Researchers Reproduce Anthropic-Style AI Vulnerability Findings Using Public Models at Low Cost

 


New research suggests that the ability to discover software vulnerabilities using artificial intelligence is becoming both inexpensive and widely accessible, raising concerns that advanced cyber capabilities may be spreading faster than anticipated.

A study by Vidoc Security demonstrates that vulnerability discovery techniques similar to those highlighted in Anthropic’s recent “Mythos” work can be reproduced using publicly available AI models. By leveraging GPT-5.4 and Claude Opus 4.6 within an open-source framework called opencode, researchers were able to replicate key findings for under $30 per scan, without access to Anthropic’s internal systems or restricted programs.

Anthropic had earlier positioned its Mythos research as highly sensitive, limiting access to a small group of major organizations and prompting concern across policy and financial circles. Reports indicated that senior figures, including Scott Bessent and Jerome Powell, discussed the implications alongside leading financial executives. The term “vulnpocalypse” resurfaced in cybersecurity discussions, reflecting fears of large-scale AI-driven exploitation.

The Vidoc team sought to test whether such capabilities were truly restricted. Using patched vulnerability examples referenced in Anthropic’s public materials, they examined issues affecting a file-sharing protocol, a security-focused operating system’s networking components, widely used video-processing software, and cryptographic libraries used for identity verification online.

Across three independent runs, both models successfully reproduced two of the documented vulnerability cases each time. Claude Opus 4.6 also independently rediscovered a flaw in OpenBSD in all three attempts, while GPT-5.4 failed to identify that specific issue. In other instances, including vulnerabilities tied to FFmpeg and wolfSSL, the systems correctly identified relevant code regions but did not fully determine the root cause.

The methodology closely mirrored workflows described by Anthropic. Instead of relying on a single prompt, the system first analyzed entire codebases, divided them into smaller segments, and ran parallel detection processes. These processes filtered meaningful signals from noise and cross-checked findings across files. Importantly, the selection of code segments was automated through earlier planning steps, rather than manually guided.
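The workflow described above can be sketched in outline: segment the codebase, fan out parallel detection passes, then keep only segments that produce a concrete signal. In this sketch `analyze_segment` is a stub standing in for a model call (here it merely flags an obviously unsafe C function), and the chunk size and codebase are invented; Vidoc's actual prompts and orchestration are in their published materials.

```python
from concurrent.futures import ThreadPoolExecutor

def chunk(source: str, max_lines: int = 3):
    """Split a source file into fixed-size line segments."""
    lines = source.splitlines()
    return ["\n".join(lines[i:i + max_lines]) for i in range(0, len(lines), max_lines)]

def analyze_segment(segment: str):
    """Stub detector: a real pipeline would send the segment to an LLM
    and parse structured findings; here we flag a known-unsafe call."""
    return [("strcpy", segment)] if "strcpy(" in segment else []

def scan(codebase: dict):
    """Chunk every file, analyze segments in parallel, filter out noise."""
    segments = [(path, seg) for path, src in codebase.items() for seg in chunk(src)]
    with ThreadPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(lambda ps: (ps[0], analyze_segment(ps[1])), segments))
    # Keep only segments that produced a concrete signal.
    return [(path, sig) for path, findings in results for sig, _ in findings]

code = {"net.c": "int f(char *s){\nchar b[8];\nstrcpy(b, s);\nreturn 0;\n}"}
print(scan(code))
```

The structural point is that segment selection is driven by the pipeline itself rather than by a human choosing suspicious files, which is what makes the approach cheap to scale across entire codebases.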

Despite these results, the study underlines a clear distinction. Anthropic’s system reportedly went beyond identifying vulnerabilities by constructing detailed exploit pathways, such as chaining code fragments across multiple network packets to achieve full remote control of a system. The public models, while capable of locating weaknesses, did not reach that level of execution.

According to researcher Dawid Moczadło, this marks a turning point in cybersecurity economics. The most resource-intensive part of the process, identifying credible vulnerability signals, is becoming accessible to anyone with standard API access. However, validating those findings and converting them into reliable security insights or exploit strategies remains significantly more complex.

Anthropic itself has acknowledged that traditional benchmarks like Cybench are no longer sufficient to measure modern AI cyber capabilities, noting that its Mythos system exceeded those standards. The company estimated that comparable capabilities could become widespread within six to eighteen months.

The Vidoc findings suggest that, at least for vulnerability discovery, this transition may already be underway. By publishing their methodology, prompts, and results, the researchers highlight how open tools and commercially available models can replicate parts of workflows once considered highly restricted.

For organizations, the implications are significant. As AI reduces the cost and effort required to uncover software flaws, defenders may need to adopt continuous monitoring, faster remediation cycles, and deeper behavioral analysis. The challenge is no longer just identifying vulnerabilities, but managing the scale and speed at which they can now be discovered.

Fake Court Summons And Survey Scams Surge As Regions Bank Warns Of Rising Consumer Fraud Risks

 


Fear remains one of the most powerful tools scammers use, and today’s fraud tactics are evolving to exploit it more effectively than ever. Fake court summons and deceptive online survey scams are now being widely used to trick individuals into revealing sensitive information or making payments. Regions Bank has raised awareness around these threats, emphasizing that such schemes are designed to steal passwords, drain bank accounts, or silently install malware on personal devices. 

One of the more alarming trends involves fraudulent legal notices. Victims may receive messages claiming they missed a court date, failed to pay a toll, or owe a penalty. These alerts often create a sense of urgency, warning of arrest or severe consequences if immediate action is not taken. The goal is to push individuals into reacting quickly without verifying the information. Instead of legitimate resolution channels, these messages direct users to click suspicious links, scan QR codes, or call phone numbers that connect them directly to scammers.  

Although these communications can appear convincing, they often contain clear warning signs. Aggressive or threatening language, demands for immediate payment, and instructions to use unconventional methods such as gift cards or wire transfers are strong indicators of fraud. Genuine legal authorities follow formal processes and provide verifiable documentation, allowing individuals to confirm claims through official sources. Ignoring these red flags can lead to serious financial and data security consequences. Another emerging tactic involves fake CAPTCHA prompts. 

These scams exploit the familiarity of “I’m not a robot” verification tools but introduce unusual instructions, such as pressing specific keyboard shortcuts. What seems like a routine step can actually trigger hidden malicious code, potentially installing malware on the user’s device. Legitimate CAPTCHA systems are simple and never require complex or unexpected actions, making any deviation a likely sign of a scam. Survey scams represent another widespread threat. These schemes lure victims with promises of rewards such as cash, gift cards, or free products. After completing a series of questions, users are told they have “won” and are asked to provide payment details for a small fee. 

In reality, the reward never materializes, and the scammers gain access to valuable financial information. Organizations like the Better Business Bureau have noted a rise in such scams, highlighting unrealistic offers, vague company information, suspicious links, and poor grammar as common warning signs. If individuals encounter these scams, experts recommend deleting the message immediately, avoiding any engagement, and reporting the incident through official platforms such as the Internet Crime Complaint Center. Acting quickly is critical, especially if personal or financial information has already been shared. 

Ultimately, staying vigilant is the most effective defense. Avoid clicking on unknown links, verify information through trusted sources, enable multi-factor authentication, and regularly monitor financial accounts for unusual activity. These scams rely on urgency, fear, and enticing rewards to bypass rational thinking. While tactics continue to evolve, a cautious and informed approach remains the strongest way to protect against fraud in an increasingly digital environment.

Bank of America Bets Big on Risky Anthropic AI

 

Bank of America is aggressively expanding its use of Anthropic's advanced AI technology, even as U.S. regulators issue stark cybersecurity warnings. The bank's commitment highlights a broader trend where nearly 70% of financial institutions integrate AI into operations, prioritizing innovation over potential risks. This move comes amid global concerns about Anthropic's Claude Mythos Preview model, which has detected thousands of high-severity vulnerabilities in major operating systems and browsers. 

In early April 2026, Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell urgently met with CEOs from top U.S. banks, including Bank of America, to flag risks from Mythos. Officials warned that deploying the model could expose customer personal data to cyber threats, prompting Anthropic to limit access to a select group of tech and banking experts. World leaders echoed these fears: Bank of England Governor Andrew Bailey called AI a "very serious challenge," while ECB President Christine Lagarde supported restrictions on the technology. 

Anthropic itself has cautioned about the dangers, stating that rapid AI progress could spread powerful vulnerability-detection capabilities to unsafe actors, with severe fallout for economies and national security. Despite this, banks like JPMorgan, Goldman Sachs, Citigroup, and Bank of America are testing Mythos to bolster their own defenses. Canadian regulators and European counterparts have also raised alarms, underscoring the technology's global implications. 

Bank of America leads in AI adoption, with over 90% of its 200,000+ employees using the tools daily and a client-facing AI assistant logging three billion interactions in 2025 alone. Backed by a $13.5 billion tech budget—including $4 billion for AI initiatives—the bank focuses on end-to-end process transformation to boost revenue, client experience, and efficiency. Recent rollouts include an AI tool for financial advisors to identify prospects and summarize meetings. 

Bank of America's CTO Hari Gopalkrishnan emphasized balancing scale with governance at the Semafor World Economy 2026 summit, noting, "If you overdo it, you stall innovation. If you underdo it, you introduce a lot of risk." The strategy shifts from small proofs-of-concept to large-scale applications, aiming for measurable ROI while navigating regulatory scrutiny. As AI reshapes banking, Bank of America's bold push tests the fine line between opportunity and peril.

Hackers Use Hidden QEMU Linux VMs to Evade Windows Security and Launch Stealth Attacks

 

Cybersecurity experts have uncovered a stealthy tactic where attackers bypass Windows defenses by running concealed Linux virtual machines using QEMU. Researchers warn that these hidden environments allow threat actors to maintain persistent access, steal sensitive data, and even deploy ransomware.

Earlier findings highlighted how Russian-linked groups exploited Microsoft Hyper-V to install covert Linux virtual machines on targeted systems. However, because enterprise environments typically restrict or closely monitor Hyper-V, attackers have shifted to less scrutinized alternatives.

Security firm Sophos reports active misuse of QEMU, which enables attackers to operate a full Linux system within a Windows host. Activities carried out inside these virtual machines are largely undetectable by endpoint protection tools such as Windows Defender.

“Rather than deploying a pre-built toolkit, the attackers manually install and compile their full attack suite within the VM, including Impacket, KrbRelayx, Coercer, BloodHound.py, NetExec, Kerbrute, Metasploit, and supporting libraries for Python, Rust, Ruby, and C++,” Sophos said in a report detailing active exploitation campaigns.

Attackers frequently rely on Alpine Linux, particularly version 3.22.0, due to its minimal size and low resource consumption. This allows the malicious VM to operate with almost no visible impact on the host system.

Once their objectives are achieved, attackers can simply shut down the VM, erase its image, and disappear without leaving significant traces.

“Attackers are drawn to QEMU and more common hypervisor-based virtualization tools like Hyper-V, VirtualBox, and VMware,” Sophos researchers said.

“Malicious activity within a virtual machine (VM) is essentially invisible to endpoint security controls and leaves little forensic evidence on the host itself.”

One group leveraging this technique is linked to the PayoutsKing ransomware campaign and tracked as STAC4713. In observed cases, attackers used QEMU to establish covert reverse SSH backdoors, enabling them to deploy additional malicious payloads.

Even though a basic QEMU setup can run without administrative privileges, attackers often escalate access by launching VMs under a SYSTEM account via scheduled tasks. They disguise virtual disk files as innocuous items like “vault.db” and later shift to obscure DLL filenames such as “birsv.dll.”

Through these hidden VMs, attackers create reverse SSH tunnels to remote servers, granting full control over compromised systems. They also exploit built-in Windows applications like Paint, Notepad, and Edge to explore network shares and access files.

Another threat actor, identified as STAC3725, deployed a QEMU-based VM in February to conduct credential harvesting and system reconnaissance. This setup enabled activities such as Kerberos enumeration, Active Directory mapping, and even running FTP servers for staging malware or exfiltrating data.

“The abuse of QEMU represents a growing evasion trend where threat actors leverage legitimate virtualization software to conceal malicious actions from endpoint protection agents and audit logs,” Sophos warns.

“A hidden VM with a pre-loaded or compiled attack toolkit can enable a threat actor to have long-term access to a network, providing the ability to deploy malware, harvest credentials, and move laterally without leaving evidence on the host itself.”

To mitigate such risks, researchers advise IT teams to regularly audit systems for unexpected QEMU installations and suspicious scheduled tasks, especially those running under SYSTEM-level privileges. Indicators of compromise may include unusual SSH port forwarding (particularly port 22), outbound SSH connections from uncommon ports, and virtual disk files with atypical extensions such as .db, .dll, or .qcow2.
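One of those indicators, virtual disk images hidden behind innocuous extensions, lends itself to a simple hunt: qcow2 images begin with the magic bytes `QFI\xfb` regardless of what the file has been renamed to, so a `.db` or `.dll` that starts with them deserves a closer look. The sketch below is illustrative, not a full forensic tool; the demo directory and filename are invented for the example.

```python
import os

# qcow2 images start with these four bytes at offset 0.
QCOW2_MAGIC = b"QFI\xfb"

def find_disguised_qcow2(root: str):
    """Walk a directory tree and flag qcow2 images not named *.qcow2."""
    hits = []
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    if f.read(4) == QCOW2_MAGIC and not name.endswith(".qcow2"):
                        hits.append(path)
            except OSError:
                continue  # unreadable/locked files are skipped
    return hits

# Demo: a fake disk image renamed to look like a database file.
os.makedirs("/tmp/scan_demo", exist_ok=True)
with open("/tmp/scan_demo/vault.db", "wb") as f:
    f.write(QCOW2_MAGIC + b"\x00" * 16)
print(find_disguised_qcow2("/tmp/scan_demo"))
```

Checking content rather than extension is the general lesson: the attackers in these campaigns relied on defenders trusting filenames like “vault.db”.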

Security Researchers Uncover QEMU-Powered Evasion in Payouts King Ransomware


 

Several recent incidents of ransomware activity attributed to the Payouts King operation have highlighted a systematic shift toward virtualization-assisted intrusions, with attackers embedding QEMU as an execution layer within compromised systems. 

The attackers create concealed virtual machines whose QEMU instances are configured with reverse SSH backdoors. These VMs operate largely independently of the host system, running malicious payloads and maintaining persistence outside the visibility of conventional endpoint security measures. 

The investigation identified at least two parallel campaigns, one directly connected to Payouts King and the other stemming from exploitation of the CitrixBleed 2 flaw. Both campaigns leverage virtualization not only for evasion but also for staging post-exploitation activity. 

Inside these isolated environments, attackers use tools such as Rclone, Chisel, and BusyBox to harvest credentials, investigate Active Directory, enumerate Kerberos, and stage data via temporary FTP servers. 

This evolution reflects a broader operational trend: ransomware actors, including suspected initial access brokers, are moving from traditional encrypt-and-extort models to layered intrusion strategies that emphasize stealth, extended access, and pre-encryption intelligence gathering. The shift shrinks detection windows and challenges defenses that rely solely on file-based security indicators. 

In essence, QEMU is an open-source machine emulator and virtualizer that can run full operating systems as virtual machines on a host, a capability increasingly exploited for malicious purposes. Because host-based security controls have no visibility into processes executed within these isolated environments, attackers can use QEMU instances to deploy payloads, store tooling, and set up covert remote access channels over SSH without attracting attention. 

There is precedent for the technique: it has been used in previous operations linked to the 3AM ransomware group, the LoudMiner campaign, and the CRON#TRAP activity cluster. Sophos's recent analysis details its operationalization across two distinct intrusion sets. The first, observed since November 2025, has been attributed to the Payouts King ransomware operation. 

It overlaps with activity associated with GOLD ENCOUNTER, which is known to target hypervisors and deploy encryptors within VMware ESXi environments. In this campaign, attackers create a scheduled task called TPMProfiler that launches a hidden QEMU virtual machine with SYSTEM privileges, using virtual disk images disguised as benign database and DLL files. 
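That persistence mechanism suggests a concrete hunt: enumerate scheduled tasks and flag any running as SYSTEM whose command line references QEMU or a disguised disk image. The sketch below parses the CSV produced by `schtasks /query /fo CSV /v`; the column names assume the English-locale header, the suspect-string list is illustrative, and the sample data is synthetic rather than captured output.

```python
import csv
import io

# Strings worth flagging in a task's command line, per the reported TTPs.
SUSPECT_STRINGS = ("qemu-system", ".qcow2", "vault.db")

def suspicious_tasks(csv_text: str):
    """Flag SYSTEM scheduled tasks whose command references QEMU artifacts."""
    hits = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        cmd = row.get("Task To Run", "").lower()
        user = row.get("Run As User", "")
        if user.upper() in ("SYSTEM", "NT AUTHORITY\\SYSTEM") and any(
                s in cmd for s in SUSPECT_STRINGS):
            hits.append(row["TaskName"])
    return hits

# Synthetic sample mirroring the reported TPMProfiler task.
sample = (
    '"TaskName","Task To Run","Run As User"\n'
    '"\\TPMProfiler","qemu-system-x86_64.exe -hda vault.db","SYSTEM"\n'
    '"\\GoogleUpdate","GoogleUpdate.exe /ua","Users"\n'
)
print(suspicious_tasks(sample))
```

On a live host the CSV would come from running `schtasks` and the suspect list would be broadened, but the filtering logic, SYSTEM execution plus virtualization artifacts in the command line, matches the indicators Sophos describes.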

Through carefully configured port forwarding, the adversary maintains isolation within the virtual layer while enabling reverse SSH access into the compromised host. The environment typically runs Alpine Linux 3.22.0, preloaded with offensive tools such as AdaptixC2, Chisel, BusyBox, and Rclone that facilitate communication, reconnaissance, and data movement. A parallel campaign, identified in February as STAC3725, exploits the CitrixBleed 2 flaw (CVE-2025-5777) in NetScaler ADC and Gateway appliances to gain initial access. 

Once credentials have been harvested and domain-level reconnaissance has been conducted, QEMU-based virtual environments can be employed to perform similar functions, strengthening virtualization’s role as a stealth-oriented, persistent attack substrate. 

An attacker initiates post-compromise activity by delivering a staged archive (an.zip) to systems breached through NetScaler as part of the STAC3725 intrusion chain. The embedded executable (an.exe) provisioned an AppMgmt service, created a privileged local account (CtxAppVCOMService), and established a persistent remote access channel via a ScreenConnect client delivered as an MSI package.

Running with SYSTEM privileges, ScreenConnect connects to the relay infrastructure and orchestrates delivery of a secondary archive containing the concealed QEMU environment. The archive is unpacked with 7-Zip, after which qemu-system-x86_64.exe boots the Alpine Linux virtual machine from a custom disk image using the qcow2 driver. 

Working inside this isolated layer, the adversaries deliberately build their attack frameworks in situ rather than relying on pre-configured toolsets, compiling tools such as Impacket, KrbRelayx, Coercer, BloodHound.py, NetExec, Kerbrute, and Metasploit from multi-language dependencies spanning Python, Rust, Ruby, and C++. 

This modular approach supports a variety of post-exploitation activities, including credential harvesting, Kerberos enumeration, Active Directory mapping, and data staging via lightweight FTP services. Host-level manipulation continues alongside these actions: enabling WDigest credential storage, installing utilities to alter Microsoft Defender exclusions, executing reconnaissance commands, and loading vulnerable kernel drivers to weaken system defenses. 
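Re-enabling WDigest credential storage is a well-known tampering step: it means setting the `UseLogonCredential` value to 1 under `HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\WDigest`, which causes Windows to cache plaintext credentials again. The article does not describe how the attackers set it, but one portable way to audit for it offline is to scan an exported .reg file, sketched below with a hypothetical export snippet.

```python
# Audit sketch: detect WDigest plaintext caching in an exported .reg file.
WDIGEST_KEY = r"\Control\SecurityProviders\WDigest"

def wdigest_enabled(reg_export: str) -> bool:
    """Return True if the export shows UseLogonCredential set to 1."""
    in_key = False
    for line in reg_export.splitlines():
        line = line.strip()
        if line.startswith("["):
            # Track whether we are inside the WDigest key section.
            in_key = WDIGEST_KEY.lower() in line.lower()
        elif in_key and line.lower().startswith('"uselogoncredential"'):
            return line.endswith("dword:00000001")
    return False

sample = """\
[HKEY_LOCAL_MACHINE\\SYSTEM\\CurrentControlSet\\Control\\SecurityProviders\\WDigest]
"UseLogonCredential"=dword:00000001
"""
print(wdigest_enabled(sample))  # True -> plaintext credential caching re-enabled
```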

Follow-on activity varies from incident to incident, further suggesting a division of labor consistent with initial access broker ecosystems. Persistence mechanisms include enterprise deployment tools and peer-to-peer networking frameworks such as NetBird, along with attempts to extract browser session information and disable endpoint protection via scripting. 

Together, these operations reinforce the increasing use of virtualization-supported evasion, where malicious activity is effectively dispersed into transient, attacker-controlled environments that can be hidden from traditional monitoring techniques. 

Defensive guidance calls for detecting anomalous QEMU deployments, unauthorized SYSTEM-level scheduled tasks, irregular SSH tunneling behavior, and atypical virtual disk artifacts, especially since Zscaler's intelligence associates this ransomware cluster with tactics historically used by BlackBasta affiliates, such as phishing via Microsoft Teams and the abuse of remote assistance tools. 
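Combining two of those signals, SYSTEM-level scheduled tasks and QEMU-related command lines, already yields a useful hunting heuristic. The sketch below is illustrative only: the task records are hypothetical stand-ins for what would, in practice, be parsed from `schtasks /query /v /fo csv` output or from Task Scheduler event logs.

```python
# Triage sketch: flag SYSTEM scheduled tasks whose command line suggests a
# hidden VM launch. Thresholds and indicator lists are illustrative.
SUSPICIOUS_BINARIES = ("qemu-system", "qemu-img")
SUSPICIOUS_EXTENSIONS = (".qcow2", ".img", ".vmdk")

def flag_task(name: str, run_as: str, command: str) -> bool:
    """Flag SYSTEM tasks referencing virtualization binaries or disk images."""
    if run_as.upper() not in ("SYSTEM", "NT AUTHORITY\\SYSTEM"):
        return False
    cmd = command.lower()
    return (any(b in cmd for b in SUSPICIOUS_BINARIES)
            or any(cmd.rstrip('"').endswith(e) for e in SUSPICIOUS_EXTENSIONS)
            or "hostfwd=" in cmd)

tasks = [  # hypothetical inventory; the TPMProfiler name mirrors the campaign
    ("TPMProfiler", "SYSTEM", r'"C:\ProgramData\qemu-system-x86_64.exe" -display none'),
    ("GoogleUpdate", "Users", r'"C:\Program Files\Google\Update\GoogleUpdate.exe"'),
]
print([n for n, u, c in tasks if flag_task(n, u, c)])  # ['TPMProfiler']
```

Note that this catches only obvious cases; renamed QEMU binaries would require pairing the check with file-hash or magic-byte inspection.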

All in all, these findings indicate an increased level of operational maturity within the Payouts King ecosystem, which integrates stealth infrastructure, flexible access vectors, and virtualization-based execution into a cohesive attack model that extends far beyond conventional ransomware techniques. 

A Zscaler attribution report also confirms this trajectory, pointing to overlapping tradecraft such as spam-driven intrusion attempts, social engineering deployments via Microsoft Teams, and abuse of remote access utilities by former BlackBasta affiliates. 

It is important to note that the ransomware itself reflects this sophistication: it features heavy obfuscation, anti-analysis safeguards, and persistence mechanisms embedded in scheduled tasks, and it actively terminates security processes through low-level system calls. Its encryption scheme, AES-256 in CTR mode combined with RSA-4096, applies intermittent encryption to large files, demonstrating a calculated balance between speed and impact. 
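Intermittent encryption trades coverage for speed: only the first portion of each fixed-size stripe of a large file is encrypted, which renders the file unusable while touching far fewer bytes than full encryption. The sketch below shows the stripe logic only; a repeating-key XOR stands in for the real AES-256-CTR keystream (CTR mode is itself keystream-XOR, so the structure is analogous), RSA-4096 key wrapping is omitted, and the stripe sizes are illustrative rather than taken from the sample.

```python
def xor_keystream(data: bytes, key: bytes) -> bytes:
    """Stand-in for AES-256-CTR: repeating-key XOR (NOT real encryption)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def intermittent_encrypt(data: bytes, key: bytes,
                         enc_len: int = 16, stripe: int = 64) -> bytes:
    """Encrypt only the first enc_len bytes of every stripe-byte stripe."""
    out = bytearray(data)
    for start in range(0, len(data), stripe):
        chunk = data[start:start + enc_len]
        out[start:start + len(chunk)] = xor_keystream(chunk, key)
    return bytes(out)

key = b"illustrative-key"
blob = bytes(range(256))                # a 256-byte stand-in "file"
enc = intermittent_encrypt(blob, key)
touched = sum(1 for a, b in zip(blob, enc) if a != b)
print(touched)  # 64: only a quarter of the bytes rewritten, file still ruined
```

Because CTR-style encryption is symmetric, applying the same function again with the same key restores the original, mirroring how a decryptor only needs the key and the stripe parameters.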

As a result, extortion workflows direct victims to leak portals on the dark web. With virtualization abuse increasingly blurring traditional endpoint visibility boundaries, defenders must shift their focus toward behavioral correlation, privilege anomaly detection, and deep examination of system-level orchestration patterns, as these campaigns reflect a broader shift toward ransomware operations designed to remain persistent, precise, and invisible within organizations.

Salesforce’s New “Headless 360” Lets AI Agents Run Its Platform

 


Salesforce has introduced what it describes as the most significant architectural overhaul in its 27-year history, launching a new initiative called “Headless 360.” The update is designed to allow artificial intelligence agents to control and operate the company’s entire platform without requiring a traditional graphical interface such as a dashboard or browser.

The announcement was made during the company’s annual TDX developer conference in San Francisco, where Salesforce revealed that it is releasing more than 100 new developer tools and capabilities. These tools immediately enable AI systems to interact directly with Salesforce environments. The move reflects a deeper shift in enterprise software, where the rise of intelligent agents capable of reasoning and executing tasks is forcing companies to rethink whether conventional user interfaces are still necessary.

Salesforce’s answer to that question is direct: instead of designing software primarily for human interaction, the platform is now being rebuilt so that machines can access and operate it programmatically. According to the company, this transformation began over two years ago with a strategic decision to expose all internal capabilities rather than keeping them hidden behind user interfaces.

This shift is taking place during a period of uncertainty in the broader software industry. Concerns that advanced AI models developed by companies like OpenAI and Anthropic could disrupt traditional software business models have already impacted market performance. Industry indicators, including software-focused exchange-traded funds, have declined substantially, reflecting investor anxiety about the long-term relevance of existing SaaS platforms.

Senior leadership at Salesforce has indicated that the new architecture is based on practical challenges observed while deploying AI systems across enterprise clients. According to internal insights, building an AI agent is only the initial step. Organizations also face ongoing challenges related to development workflows, system reliability, updates, and long-term maintenance.

To address these challenges, Headless 360 is structured around three foundational pillars.

The first pillar focuses on development flexibility. Salesforce has introduced more than 60 tools based on Model Context Protocol, along with over 30 pre-configured coding capabilities. These allow external AI coding agents, including systems such as Claude Code, Cursor, Codex, and Windsurf, to gain direct, real-time access to a company’s Salesforce environment. This includes data, workflows, and underlying business logic. Developers are no longer required to use Salesforce’s own integrated development environment and can instead operate from any terminal or external setup.

In addition, Salesforce has upgraded its native development environment, Agentforce Vibes 2.0, by introducing an “open agent harness.” This system supports multiple agent frameworks, including those from OpenAI and Anthropic, and dynamically adjusts capabilities depending on which AI model is being used. The platform also supports multiple models simultaneously, including advanced systems like Claude Sonnet and GPT-5, while maintaining full awareness of the organization’s data from the start.

A notable technical enhancement is the introduction of native React support. During demonstrations, developers created a fully functional application using React instead of Salesforce’s traditional Lightning framework. The application connected to Salesforce data through GraphQL while still inheriting built-in security controls. This significantly expands front-end flexibility for developers.

The second pillar focuses on deployment. Salesforce has introduced an “experience layer” that separates how an AI agent functions from how it is presented to users. This allows developers to design an experience once and deploy it across multiple platforms, including Slack, mobile applications, Microsoft Teams, ChatGPT, Claude, Gemini, and other compatible environments. Importantly, this can be done without rewriting code for each platform. The approach represents a change from requiring users to enter Salesforce interfaces to delivering Salesforce-powered experiences directly within existing workflows.

The third pillar addresses trust, control, and scalability. Salesforce has introduced a comprehensive set of tools that manage the entire lifecycle of AI agents. These include systems for testing, evaluation, monitoring, and experimentation. A central component is “Agent Script,” a new programming language designed to combine structured, rule-based logic with the flexible reasoning capabilities of AI models. It allows organizations to define which parts of a process must follow strict rules and which parts can rely on AI-driven decision-making.

Additional tools include a Testing Center that identifies logical errors and policy violations before deployment, custom evaluation systems that define performance standards, and an A/B testing interface that allows multiple agent versions to run simultaneously under real-world conditions.

One of the key technical challenges addressed by Salesforce is the difference between probabilistic and deterministic systems. AI agents do not always produce identical results, which can create instability in enterprise environments where consistency is critical. Early adopters reported that once agents were deployed, even small modifications could lead to unpredictable outcomes, forcing teams to repeat extensive testing processes.

Agent Script was developed to solve this problem by introducing a structured framework. It defines agent behavior as a state machine, where certain steps are fixed and controlled while others allow flexible reasoning. This approach ensures both reliability and adaptability.
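The announcement does not disclose Agent Script's actual syntax, so the sketch below only illustrates the underlying idea in plain Python: an agent defined as a state machine in which some transitions are hard-coded rules and others delegate to a model call (stubbed out here). All names, states, and the refund scenario are hypothetical.

```python
from typing import Callable

def stub_model(prompt: str) -> str:
    """Stand-in for an LLM call; a real agent would query a model here."""
    return "refund_approved" if "small amount" in prompt else "needs_review"

# Deterministic steps always transition the same way; flexible steps let the
# model's answer choose the next state.
def classify(ctx): ctx["state"] = "policy_check"          # fixed rule
def policy_check(ctx):                                    # flexible: model decides
    verdict = stub_model(ctx["request"])
    ctx["state"] = "close" if verdict == "refund_approved" else "escalate"
def close(ctx): ctx["state"] = None                       # terminal
def escalate(ctx): ctx["state"] = None                    # terminal

STATES: dict[str, Callable] = {"classify": classify, "policy_check": policy_check,
                               "close": close, "escalate": escalate}

def run(request: str) -> list[str]:
    """Drive the state machine to completion, recording the path taken."""
    ctx, path = {"state": "classify", "request": request}, []
    while ctx["state"]:
        path.append(ctx["state"])
        STATES[ctx["state"]](ctx)
    return path

print(run("refund, small amount"))  # ['classify', 'policy_check', 'close']
```

The reliability benefit is that only `policy_check` can vary between runs; every other transition is reproducible by construction, which narrows what has to be re-tested after a change.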

Salesforce also distinguishes between two types of AI system architectures. Customer-facing agents, such as those used in sales or support, require strict control to ensure they follow predefined rules and maintain brand consistency. These operate within structured workflows. In contrast, employee-facing agents are designed to operate more freely, exploring multiple paths and refining their outputs dynamically before presenting results. Both systems operate on a unified underlying architecture, allowing organizations to manage them without maintaining separate platforms.

The company is also expanding its ecosystem. It now supports integration with a wide range of AI models, including those from Google and other providers. A new marketplace brings together thousands of applications and tools, supported by a $50 million initiative aimed at encouraging further development.

At the same time, Salesforce is taking a flexible approach to emerging technical standards such as Model Context Protocol. Rather than relying on a single method, the company is offering APIs, command-line interfaces, and protocol-based integrations simultaneously to remain adaptable as the industry evolves.

A real-world example cited during the announcement showed how one company built an AI-powered customer service agent in just 12 days. The system now handles approximately half of customer interactions, improving efficiency while reducing operational costs.

Finally, Salesforce is also changing its business model. The company is shifting away from traditional per-user pricing toward a consumption-based approach, reflecting a future where AI agents, rather than human users, perform the majority of work within enterprise systems.

This transformation suggests a new layer in strategic operations. Instead of resisting the rise of AI, Salesforce is restructuring its platform to align with it, betting that its existing data infrastructure, enterprise integrations, and accumulated operational logic will continue to provide value even as software becomes increasingly autonomous.