At the RE//verse 2026 event, security researcher Markus Gaasedelen introduced a technique called the “Bliss” double glitch. This method relies on manipulating electrical voltage at precise moments to interfere with the console’s startup process, effectively bypassing its built-in protections.
This marks the first known instance where the Xbox One’s hardware defenses have been broken in a way that others can replicate. The achievement is being compared to the Reset Glitch Hack that affected the Xbox 360, although this newer approach operates at a deeper level. Instead of targeting software vulnerabilities, it directly interferes with the boot ROM, a core component embedded in the console’s chip. By doing so, the exploit grants complete control over the system, including its most secure layers such as the hypervisor.
When the Xbox One was introduced in 2013, Microsoft designed it with an unusually strong security model. The system relied on multiple layers of encryption and authentication, linking firmware, the operating system, and game files into a tightly controlled verification chain. Within the company, it was even described as one of the most secure products Microsoft had ever built.
A substantial part of this design was its secure boot process. Unlike the Xbox 360, which was compromised through reset-line manipulation, the Xbox One removed such external entry points. It also incorporated a dedicated ARM-based security processor responsible for verifying every stage of the startup sequence. Without valid cryptographic signatures, no code was allowed to run. For many years, this approach appeared highly effective.
Rather than attacking these higher-level protections, the researcher focused on the physical behavior of the hardware itself. Traditional glitching techniques rely on disrupting timing signals, but the Xbox One’s architecture left little opportunity for that. Instead, the method used here involves voltage glitching, where the power supplied to the processor is briefly disrupted.
These momentary drops in voltage can cause the processor to behave unpredictably, such as skipping instructions or misreading operations. However, the timing must be extremely precise, as even a tiny variation can result in failure or system crashes.
To achieve this level of accuracy, specialized hardware tools were developed to monitor and control electrical signals within the system. This allowed the researcher to closely observe how the console behaves at the silicon level and identify the exact points where interference would be effective.
The resulting “Bliss” technique uses two carefully timed voltage disruptions during the startup process. The first interferes with memory protection mechanisms managed by the ARM Cortex subsystem. The second targets a memory-copy operation that occurs while the system is loading initial data. If both steps are executed correctly, the system is redirected to run code chosen by the attacker, effectively taking control of the boot process.
Unlike many modern exploits, this method does not depend on software flaws that can be corrected through updates. Instead, it targets the boot ROM, which is permanently embedded in the chip during manufacturing. Because this code cannot be modified, the vulnerability cannot be patched. As a result, the exploit allows unauthorized code execution across all system layers, including protected components.
With this level of access, it becomes possible to run alternative operating systems, extract encrypted firmware, and analyze internal system data. This has implications for both security research and digital preservation, as it enables deeper understanding of the console’s architecture and may support efforts to emulate its environment in the future.
Beyond research applications, the findings may also lead to practical tools. There is speculation that the technique could be adapted into hardware modifications similar to modchips, which automate the precise electrical conditions needed for the exploit. Such developments could revive longstanding debates around console modification and software control.
From a security perspective, the immediate impact on Microsoft may be limited, as the Xbox One is no longer the company’s latest platform. Newer systems have adopted updated security designs based on similar principles. However, the discovery serves a lesson for the industry: no system can be considered permanently secure, especially when attacks target the underlying hardware itself.
Opening a project in a code editor is supposed to be routine. In this case, it is enough to trigger a full malware infection.
Security researchers have linked an ongoing campaign associated with North Korean actors, tracked as Contagious Interview or WaterPlum, to a malware family known as StoatWaffle. Instead of relying on software vulnerabilities, the group is embedding malicious logic directly into Microsoft Visual Studio Code (VS Code) projects, turning a trusted development tool into the starting point of an attack.
The entire mechanism is hidden inside a file developers rarely question: tasks.json. This file is typically used to automate workflows. In these attacks, it has been configured with a setting that forces execution the moment a project folder is opened. No manual action is required beyond opening the workspace.
Research from NTT Security shows that the embedded task connects to an external web application, previously hosted on Vercel, to retrieve additional data. The same task operates consistently regardless of the operating system, meaning the behavior does not change between environments even though most observed cases involve Windows systems.
Once triggered, the malware checks whether Node.js is installed. If it is not present, it downloads and installs it from official sources. This ensures the system can execute the rest of the attack chain without interruption.
What follows is a staged infection process. A downloader repeatedly contacts a remote server to fetch additional payloads. Each stage behaves in the same way, reaching out to new endpoints and executing the returned code as Node.js scripts. This creates a recursive chain where one payload continuously pulls in the next.
StoatWaffle is built as a modular framework. One component is designed for data theft, extracting saved credentials and browser extension data from Chromium-based browsers and Mozilla Firefox. On macOS systems, it also targets the iCloud Keychain database. The collected information is then sent to a command-and-control server.
A second module functions as a remote access trojan, allowing attackers to operate the infected system. It supports commands to navigate directories, list and search files, execute scripts, upload data, run shell commands, and terminate itself when required.
Researchers note that the malware is not static. The operators are actively refining it, introducing new variants and updating existing functionality.
The VS Code-based delivery method is only one part of a broader campaign aimed at developers and the open-source ecosystem. In one instance, attackers distributed malicious npm packages carrying a Python-based backdoor called PylangGhost, marking its first known propagation through npm.
Another campaign, known as PolinRider, involved injecting obfuscated JavaScript into hundreds of public GitHub repositories. That code ultimately led to the deployment of an updated version of BeaverTail, a malware strain already linked to the same threat activity.
A more targeted compromise affected four repositories within the Neutralinojs GitHub organization. Attackers gained access by hijacking a contributor account with elevated permissions and force-pushed malicious code. This code retrieved encrypted payloads hidden within blockchain transactions across networks such as Tron, Aptos, and Binance Smart Chain, which were then used to download and execute BeaverTail. Victims are believed to have been exposed through malicious VS Code extensions or compromised npm packages.
According to analysis from Microsoft, the initial compromise often begins with social engineering rather than technical exploitation. Attackers stage convincing recruitment processes that closely resemble legitimate technical interviews. Targets are instructed to run code hosted on platforms such as GitHub, GitLab, or Bitbucket, unknowingly executing malicious components as part of the assessment.
The individuals targeted are typically experienced professionals, including founders, CTOs, and senior engineers in cryptocurrency and Web3 sectors. Their level of access to infrastructure and digital assets makes them especially valuable. In one recent case, attackers unsuccessfully attempted to compromise the founder of AllSecure.io using this approach.
Multiple malware families are used across these attack chains, including OtterCookie, InvisibleFerret, and FlexibleFerret. InvisibleFerret is commonly delivered through BeaverTail, although recent intrusions show it being deployed after initial access is established through OtterCookie. FlexibleFerret, also known as WeaselStore, exists in both Go and Python variants, referred to as GolangGhost and PylangGhost.
The attackers continue to adjust their techniques. Newer versions of the malicious VS Code projects have moved away from earlier infrastructure and now rely on scripts hosted on GitHub Gist to retrieve additional payloads. These ultimately lead to the deployment of FlexibleFerret. The infected projects themselves are distributed through GitHub repositories.
Security analysts warn that placing malware inside tools developers already trust significantly lowers suspicion. When the code is presented as part of a hiring task or technical assessment, it is more likely to be executed, especially under time pressure.
Microsoft has responded to the misuse of VS Code tasks with security updates. In the January 2026 release (version 1.109), a new setting disables automatic task execution by default, preventing tasks defined in tasks.json from running without user awareness. This setting cannot be overridden at the workspace level, limiting the ability of malicious repositories to bypass protections.
Additional safeguards were introduced in February 2026 (version 1.110), including a second prompt that alerts users when an auto-run task is detected after workspace trust is granted.
Beyond development environments, North Korean-linked operations have expanded into broader social engineering campaigns targeting cryptocurrency professionals. These include outreach through LinkedIn, impersonation of venture capital firms, and fake video conferencing links. Some attacks lead to deceptive CAPTCHA pages that trick victims into executing hidden commands in their terminal, enabling cross-platform infections on macOS and Windows. These activities overlap with clusters tracked as GhostCall and UNC1069.
Separately, the U.S. Department of Justice has taken action against individuals involved in supporting North Korea’s fraudulent IT worker operations. Audricus Phagnasay, Jason Salazar, and Alexander Paul Travis were sentenced after pleading guilty in November 2025. Two received probation and fines, while one was sentenced to prison and ordered to forfeit more than $193,000 obtained through identity misuse.
Officials stated that such schemes enable North Korean operatives to generate revenue, access corporate systems, steal proprietary data, and support broader cyber operations. Separate research from Flare and IBM X-Force indicates that individuals involved in these programs undergo rigorous training and are considered highly skilled, forming a key part of the country’s strategic cyber efforts.
What this means
This attack does not depend on exploiting a flaw in software. It depends on exploiting trust.
By embedding malicious behavior into tools, workflows, and hiring processes that developers rely on every day, attackers are shifting the point of compromise. In this environment, opening a project can be just as risky as running an unknown program.
Microsoft has issued an out-of-band (OOB) security update to remediate critical vulnerabilities affecting a specific subset of Windows 11 Enterprise systems that rely on hotpatch updates instead of the conventional monthly Patch Tuesday cumulative updates.
The update, identified as KB5084597, was released to fix multiple security flaws in the Windows Routing and Remote Access Service (RRAS), a built-in administrative tool used for configuring and managing remote connectivity and routing functions within enterprise networks. According to Microsoft’s official advisory, these vulnerabilities could allow remote code execution if a system connects to a malicious or attacker-controlled server through the RRAS management interface.
Microsoft clarified that the risk is limited to narrowly defined scenarios. The exposure primarily impacts Enterprise client devices that are enrolled in the hotpatch update model and are actively used for remote server management. This means that the vulnerability does not broadly affect all Windows users, but rather a specific operational environment where administrative tools interact with external systems.
The vulnerabilities addressed in this update are tracked under three identifiers: CVE-2026-25172, CVE-2026-25173, and CVE-2026-26111. These issues were initially resolved as part of Microsoft’s March 2026 Patch Tuesday updates, which were released on March 10. However, the original fixes required system reboots to be fully applied.
Microsoft’s technical description indicates that successful exploitation would require an attacker to already possess authenticated access within a domain. The attacker could then use social engineering techniques to trick a domain-joined user into initiating a connection request to a malicious server via the RRAS snap-in management tool. Once the connection is made, the vulnerability could be triggered, allowing the attacker to execute arbitrary code on the targeted system.
The KB5084597 hotpatch is cumulative in nature, meaning it incorporates all previously released fixes and improvements included in the March 2026 security update package. This ensures that systems receiving the hotpatch are brought up to the same security level as those that installed the full cumulative update.
A key reason for releasing this hotpatch separately is the operational challenge associated with system restarts. Many enterprise environments run mission-critical workloads where even brief downtime can disrupt services, impact business continuity, or affect essential infrastructure. Traditional cumulative updates require a reboot, making them less practical in such contexts.
Hotpatching addresses this challenge by applying security fixes directly into the memory of running processes. This allows vulnerabilities to be mitigated immediately without interrupting system operations. Simultaneously, the update also modifies the relevant files stored on disk so that the fixes remain effective after the next scheduled reboot, maintaining long-term system integrity.
Microsoft also noted that while fixes for these vulnerabilities had been released earlier, the hotpatch update was reissued to ensure more comprehensive protection across all affected deployment scenarios. This suggests that the company identified gaps in earlier coverage or aimed to standardize protection for systems using different update mechanisms.
It is important to note that this hotpatch is not distributed to all devices. It is only available to systems that are enrolled in Microsoft’s hotpatch update program and are managed through Windows Autopatch, a cloud-based service that automates update deployment for enterprise environments. Eligible systems will receive and apply the update automatically, without requiring user intervention or a system restart.
From a broader security standpoint, this development surfaces the increasing complexity of patch management in modern enterprise environments. As organizations adopt high-availability systems that must remain continuously operational, traditional update strategies are evolving to include alternatives such as hotpatching.
At the same time, vulnerabilities in administrative tools like RRAS demonstrate how trusted system components can become entry points for attackers when combined with social engineering and authenticated access. Even though exploitation requires specific conditions, the potential impact remains substantial due to the elevated privileges typically associated with administrative tools.
Security experts generally emphasize that organizations must go beyond simply applying patches. Continuous monitoring, strict access control policies, and user awareness training are essential to reducing the likelihood of such attack scenarios. Additionally, maintaining visibility into how administrative tools are used within a network can help detect unusual behavior before it leads to compromise.
Overall, Microsoft’s release of this hotpatch reflects both the urgency of addressing critical vulnerabilities and the need to adapt security practices to environments where uptime is as important as protection.
A recently noticed configuration inside Microsoft Copilot may allow the AI tool to reference activity from several other Microsoft platforms, prompting renewed discussion around data privacy and AI personalization. The option, which appears within Copilot’s settings, enables the assistant to use information connected to services such as Bing, MSN, and the Microsoft Edge browser. Users who are uncomfortable with this level of integration can switch the feature off.
Like many modern artificial intelligence systems, Copilot attempts to improve the usefulness of its responses by understanding more about the person interacting with it. The assistant normally does this by remembering past conversations and storing certain details that users intentionally share during chats. These stored elements help the AI maintain context across multiple interactions and generate responses that feel more tailored.
However, a specific configuration called “Microsoft usage data” expands that capability. According to reporting first highlighted by the technology outlet Windows Latest, this setting allows Copilot to reference information associated with other Microsoft services a user has interacted with. The option appears within the assistant’s Memory controls and is available through both the Copilot website and its mobile applications. Observers believe the setting was introduced recently as part of Microsoft’s effort to strengthen personalization features in its AI tools.
The Memory feature in Copilot is designed to help the assistant retain useful context. Through this system, the AI can recall earlier conversations, remember instructions or factual information shared by users, and potentially reference certain account-linked activity from other Microsoft products. The idea is that by understanding more about a user’s interests or previous discussions, the assistant can provide more relevant answers.
In practice, such capabilities can be helpful. For instance, a user who discussed a topic with Copilot previously may want to continue that conversation later without repeating the entire background. Similarly, individuals seeking guidance about personal or professional matters may receive more relevant suggestions if the assistant has some awareness of their preferences or circumstances.
Despite the convenience, the feature also raises questions about privacy. Some users may be concerned that allowing an AI assistant to accumulate information from multiple services could expose more personal data than expected. Others may want to know how that information is used beyond personalizing conversations.
Microsoft addresses these concerns in its official Copilot documentation. In its frequently asked questions section, the company states that user conversations are processed only for limited purposes described in its privacy policies. According to Microsoft, this information may be used to evaluate Copilot’s performance, troubleshoot operational issues, identify software bugs, prevent misuse of the service, and improve the overall quality of the product.
The company also says that conversations are not used to train AI models by default. Model training is controlled through a separate configuration, which users can choose to disable if they do not want their interactions contributing to AI development.
Microsoft further clarifies that Copilot’s personalization settings do not determine whether a user receives targeted advertisements. Advertising preferences are managed through a different option available in the Microsoft account privacy dashboard. Users who want to stop personalized advertising must adjust the Personalized ads and offers setting separately.
Even with these explanations, privacy concerns remain understandable, particularly because Microsoft documentation indicates that Copilot’s personalization features may already be activated automatically in some cases. When reviewing the settings on a personal device, these options were found to be switched on. Users who prefer not to allow Copilot to access broader usage data may therefore wish to disable them.
Checking these settings is straightforward. Users can open Copilot through its website or mobile application and ensure they are signed in with their Microsoft account. On the web interface, selecting the account name at the bottom of the left-hand panel opens the Settings menu, where the Memory section can be accessed. In the mobile application, the same controls are available through the side navigation menu by tapping the account name and choosing Memory.
Inside the Memory settings, users will see a general control labeled “Personalization and memory.” Two additional options appear beneath it: “Facts you’ve shared,” which stores information provided directly during conversations, and “Microsoft usage data,” which allows Copilot to reference activity from other Microsoft services.
To limit this behavior, users can switch off the Microsoft usage data toggle. They may also disable the broader Personalization and memory option if they prefer that the AI assistant does not retain contextual information about their interactions. Copilot also provides a “Delete all memory” function that removes all stored data from the system. If individual personal details have been recorded, they can be reviewed and deleted through the editing option next to “Facts you’ve shared.”
Security and privacy experts generally advise caution when sharing information with AI assistants, even when personalization features remain enabled. Sensitive or confidential details should not be entered into conversations. Microsoft itself recommends avoiding the disclosure of certain types of highly personal data, including information related to health conditions or sexual orientation.
The broader development reflects a growing trend in the technology industry. As AI assistants become integrated across multiple platforms and services, companies are increasingly using cross-service data to make these tools more helpful and personalized. While this approach can improve convenience and usability, it also underlines the grave necessity for transparent privacy controls so users remain aware of how their information is being used and can adjust those settings when necessary.
Microsoft’s new Threat Intelligence report reveals that threat actors are using genAI tools for various tasks, such as phishing, surveillance, malware building, infrastructure development, and post-hack activity.
In various incidents, AI helps to create phishing emails, summarize stolen information, debug malware, translate content, and configure infrastructure. “Microsoft Threat Intelligence has observed that most malicious use of AI today centers on using language models for producing text, code, or media. Threat actors use generative AI to draft phishing lures, translate content, summarize stolen data, generate or debug malware, and scaffold scripts or infrastructure,” the report said.
"For these uses, AI functions as a force multiplier that reduces technical friction and accelerates execution, while human operators retain control over objectives, targeting, and deployment decisions,’ warns Microsoft.
Microsoft found different hacking gangs using AI in their cyberattacks, such as North Korean hackers known as Coral Sleet (Storm-1877) and Jasper Sleet (Storm-0287), who use the AI in their remote IT worker scams.
The AI helps to make realistic identities, communications, and resumes to get a job in Western companies and have access once hired. Microsoft also explained how AI is being exploited in malware development and infrastructure creation. Threat actors are using AI coding tools to create and refine malicious code, fix errors, and send malware components to different programming languages.
A few malware experiments showed traces of AI-enabled malware that create scripts or configure behaviour at runtime. Microsoft found Coral Sleet using AI to make fake company sites, manage infrastructure, and troubleshoot their installations.
When security analysts try to stop the use of AI in these attacks, Microsoft says hackers are using jailbreaking techniques to trick AI into creating malicious code or content.
Besides generative AI use, the report revealed that hackers experiment with agentic AI to do tasks autonomously. The AI is mainly used for decision-making currently. As IT worker campaigns depend on the exploitation of authentic access, experts have advised organizations to address these attacks as insider risks.
A now-patched security weakness in GitHub Codespaces revealed how artificial intelligence tools embedded in developer environments can be manipulated to expose sensitive credentials. The issue, discovered by cloud security firm Orca Security and named RoguePilot, involved GitHub Copilot, the AI coding assistant integrated into Codespaces. The flaw was responsibly disclosed and later fixed by Microsoft, which owns GitHub.
According to researchers, the attack could begin with a malicious GitHub issue. An attacker could insert concealed instructions within the issue description, specifically crafted to influence Copilot rather than a human reader. When a developer launched a Codespace directly from that issue, Copilot automatically processed the issue text as contextual input. This created an opportunity for hidden instructions to silently control the AI agent operating within the development environment.
Security experts classify this method as indirect or passive prompt injection. In such attacks, harmful instructions are embedded inside content that a large language model later interprets. Because the model treats that content as legitimate context, it may generate unintended responses or perform actions aligned with the attacker’s objective.
Researchers also described RoguePilot as a form of AI-mediated supply chain attack. Instead of exploiting external software libraries, the attacker leverages the AI system integrated into the workflow. GitHub allows Codespaces to be launched from repositories, commits, pull requests, templates, and issues. The exposure occurred specifically when a Codespace was opened from an issue, since Copilot automatically received the issue description as part of its prompt.
The manipulation could be hidden using HTML comment tags, which are invisible in rendered content but still readable by automated systems. Within those hidden segments, an attacker could instruct Copilot to extract the repository’s GITHUB_TOKEN, a credential that provides elevated permissions. In one demonstrated scenario, Copilot could be influenced to check out a specially prepared pull request containing a symbolic link to an internal file. Through techniques such as referencing a remote JSON schema, the AI assistant could read that internal file and transmit the privileged token to an external server.
The RoguePilot disclosure comes amid broader concerns about AI model alignment. Separate research from Microsoft examined a reinforcement learning method called Group Relative Policy Optimization, or GRPO. While typically used to fine-tune large language models after deployment, researchers found it could also weaken safety safeguards, a process they labeled GRP-Obliteration. Notably, training on even a single mildly problematic prompt was enough to make multiple language models more permissive across harmful categories they had never explicitly encountered.
Additional findings stress upon side-channel risks tied to speculative decoding, an optimization technique that allows models to generate multiple candidate tokens simultaneously to improve speed. Researchers found this process could potentially reveal conversation topics or identify user queries with significant accuracy.
Further concerns were raised by AI security firm HiddenLayer, which documented a technique called ShadowLogic. When applied to agent-based systems, the concept evolves into Agentic ShadowLogic. This approach involves embedding backdoors at the computational graph level of a model, enabling silent modification of tool calls. An attacker could intercept and reroute requests through infrastructure under their control, monitor internal endpoints, and log data flows without disrupting normal user experience.
Meanwhile, Neural Trust demonstrated an image-based jailbreak method known as Semantic Chaining. This attack exploits limited reasoning depth in image-generation models by guiding them through a sequence of individually harmless edits that gradually produce restricted or offensive content. Because each step appears safe in isolation, safety systems may fail to detect the evolving harmful intent.
Researchers have also introduced the term Promptware to describe a new category of malicious inputs designed to function like malware. Instead of exploiting traditional code vulnerabilities, promptware manipulates large language models during inference to carry out stages of a cyberattack lifecycle, including reconnaissance, privilege escalation, persistence, command-and-control communication, lateral movement, and data exfiltration.
Collectively, these findings demonstrate that AI systems embedded in development platforms are becoming a new attack surface. As organizations increasingly rely on intelligent automation, safeguarding the interaction between user input, AI interpretation, and system permissions is critical to preventing misuse within trusted workflows.
Cybersecurity experts found 17 extensions for Chrome, Edge, and Firefox browsers which track user's internet activity and install backdoors for access. The extensions were downloaded over 840,000 times.
The campaign is not new. LayerX claimed that the campaign is part of GhostPoster, another campaign first found by Koi Security last year in December. Last year, researchers discovered 17 different extensions that were downloaded over 50,000 times and showed the same monitoring behaviour and deploying backdoors.
Few extensions from the new batch were uploaded in 2020, exposing users to malware for years. The extensions appeared in places like the Edge store and later expanded to Firefox and Chrome.
Few extensions stored malicious JavaScript code in the PNG logo. The code is a kind of instruction on downloading the main payload from a remote server.
The main payload does multiple things. It can hijack affiliate links on famous e-commerce websites to steal money from content creators and influencers. “The malware watches for visits to major e-commerce platforms. When you click an affiliate link on Taobao or JD.com, the extension intercepts it. The original affiliate, whoever was supposed to earn a commission from your purchase, gets nothing. The malware operators get paid instead,” said Koi researchers.
After that, it deploys Google Analytics tracking into every page that people open, and removes security headers from HTTP responses.
In the end, it escapes CAPTCHA via three different ways, and deploy invisible iframes that do ad frauds, click frauds, and tracking. These iframes disappear after 15 seconds.
Besides this, all extensions were deleted from the repositories, but users shoul also remove them personally.
This staged execution flow demonstrates a clear evolution toward longer dormancy, modularity, and resilience against both static and behavioral detection mechanisms,” said LayerX.
The PNG steganography technique is employed by some. Some people download JavaScript directly and include it into each page you visit. Others employ bespoke ciphers to encode the C&C domains and use concealed eval() calls. The same assailant. identical servers. many methods of delivery. This appears to be testing several strategies to see which one gets the most installs, avoids detection the longest, and makes the most money.
This campaign reflects a deliberate shift toward patience and precision. By embedding malicious code in images, delaying execution, and rotating delivery techniques across identical infrastructure, the attackers test which methods evade detection longest. The strategy favors longevity and profit over speed, exposing how browser ecosystems remain vulnerable to quietly persistent threats.