Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Showing posts with label Microsoft. Show all posts

Europe Pushes to Reduce Dependence on U.S. Tech as Sovereign Digital Infrastructure Gains Momentum

 




Several European governments are trying to reduce their dependence on American software, cloud platforms, and digital infrastructure as debates around data control, political influence, and technological independence become more intense across the region.

The situation has exposed contradictions in Europe’s relationship with U.S. technology companies. Microsoft chief executive Satya Nadella has largely stayed away from the kind of political messaging often associated with Alex Karp. Despite this difference, France has started moving parts of its public systems away from Microsoft Windows while simultaneously renewing contracts linked to Palantir Technologies through its domestic intelligence agency.

This complicated approach shows how Europe is attempting to distance itself from American tech firms without fully breaking away from them. Many governments now believe that relying too heavily on foreign technology companies can also mean depending on foreign laws, political priorities, and corporate influence. Still, Europe’s response has not followed one common strategy, with many actions appearing fragmented or reactive.

Much of the debate intensified after the U.S. passed the CLOUD Act in 2018 during President Donald Trump’s first term. The law gives American authorities the ability to request data from U.S.-based technology companies even if that information is stored outside the United States. For European officials, this raised concerns that storing data inside Europe may no longer be enough to fully protect sensitive information from foreign legal access.

Healthcare data quickly became one of the strongest examples used in these discussions. Medical records are considered among the most sensitive forms of information governments hold because they contain deeply personal details tied to citizens. Even after the CLOUD Act came into force, the United Kingdom partnered with companies including Google, Microsoft, and Palantir Technologies during the COVID-19 pandemic for projects involving National Health Service data.

Critics have argued that such partnerships could expose public-sector information to outside influence. France later decided that its Health Data Hub would stop using Microsoft Azure infrastructure and move toward what officials described as a sovereign cloud model. The contract was awarded to Scaleway, a cloud provider owned by French telecommunications group Iliad. Scaleway has also been expanding its network of data centers across Europe.

Scaleway later became one of four companies selected in a €180 million sovereign cloud contract backed by the European Commission. The program is intended to support cloud services that operate under European legal and regulatory standards. Notably, the European Sovereign Cloud initiative launched by Amazon Web Services was not included among the selected providers, even though Amazon created the project to answer European concerns about digital sovereignty.

Questions have also emerged around whether some so-called sovereign alternatives remain partly tied to American technology companies underneath. Some observers pointed to S3NS, a joint venture involving French defense company Thales Group and Google Cloud. Critics worry that arrangements like these could still leave room for indirect U.S. access or legal exposure despite being promoted as trusted European solutions.

Europe has faced similar problems in the search engine market. French search company Qwant was previously recommended for public servants in France while relying on Microsoft Bing’s underlying search infrastructure. The relationship later deteriorated after Qwant accused Microsoft of taking advantage of its dominant position in the market. Although French regulators declined to act against Microsoft, Qwant eventually started searching for alternatives on its own.

Qwant later partnered with German nonprofit search platform Ecosia to launch Staan, a Europe-based search index designed to reduce reliance on Google and Bing technologies. The project focuses on privacy and regional control over search infrastructure. Even so, both companies remain far smaller than their American competitors. Ecosia, despite having around 20 million users, still operates on a completely different scale compared to Google’s global user base.

One of the biggest problems facing European technology firms is market dominance from American companies. U.S. providers continue to control large parts of cloud computing, enterprise software, internet search, and artificial intelligence markets because of their global infrastructure, financial resources, and established ecosystems. European officials hope that large public-sector contracts could help regional providers compete more effectively.

Besides Scaleway, the European Commission’s sovereign cloud program also selected French companies Clever Cloud and OVHcloud, along with STACKIT. STACKIT was developed by the Schwarz Group, the parent company of Lidl, originally for its own internal systems before later being turned into a commercial cloud service.

Supporters of the initiative believe government-backed contracts could encourage more European companies to invest in domestic infrastructure instead of depending on foreign cloud providers. Backers of the program have also said the project aims to encourage digital solutions that align with European laws, governance rules, and privacy standards.

Still, Europe’s strategy of distributing contracts across several companies may create another challenge. While diversification could reduce dependence on one dominant provider and improve resilience, it may also make it harder for Europe to build a single technology giant capable of competing globally with firms such as Microsoft, Amazon, or Google.

Some critics also view sovereign tech partly as an economic strategy meant to keep European spending within the region. However, Europe’s attempts to move away from U.S. technology have not always translated into direct support for startups. In several cases, governments have instead turned toward open-source software alternatives.

France has already started replacing parts of its Windows-based systems with Linux. Public institutions in Germany, Denmark, Austria, and Italy are also exploring alternatives to Microsoft’s office software products through platforms such as LibreOffice.

Several governments have also embraced a “build instead of buy” approach by creating internal software tools. That strategy has faced criticism from parts of the technology and financial sectors. France’s Court of Auditors reportedly questioned spending linked to Visio, an internally developed platform intended to act as an alternative to Zoom and Microsoft Teams.

French newspaper Les Echos also reported frustration from parts of the country’s technology sector. Some critics argued that if governments themselves do not consistently adopt domestic technology tools, it becomes difficult to convince large private companies to do the same.

Many giants of European businesses continue selecting American technology providers when they offer stronger technical or commercial advantages. German airline Lufthansa chose Starlink for onboard internet services. Air France also selected Starlink despite partial ownership ties to the French and Dutch governments. Reports have additionally suggested that France’s national railway operator SNCF may eventually adopt similar services.

The debate around European alternatives has become particularly visible in satellite communications. During a disagreement involving Poland, Elon Musk stated publicly that “there is no substitute for Starlink.” European governments are now trying to prove otherwise by investing in domestic telecommunications and space infrastructure projects.

Public sentiment has also started influencing the discussion. After President Trump threatened to take control of Greenland, applications encouraging consumers to boycott American products surged in popularity on Denmark’s App Store rankings. The reaction showed that calls to reduce dependence on U.S. companies are no longer limited to policymakers and regulators.

Pressure is also building on European governments to reconsider contracts involving controversial American firms. Palantir’s recent public messaging and political positioning have drawn criticism inside parts of the European Union and the United Kingdom. At the same time, many European officials and citizens have started distancing themselves from X, formerly Twitter, because of growing dissatisfaction around platform governance and political discourse.

American technology companies have also shown that Europe is not always their top commercial priority. When Meta delayed the European release of Threads because of regulatory concerns tied to EU laws, it reinforced the perception that large U.S. firms can afford to deprioritize the region when legal requirements become too restrictive.

At the same time, this environment is opening new opportunities for companies building products specifically designed for European markets, languages, and legal standards. Supporters of the EuroStack initiative are pushing for rules that would encourage or require public institutions to purchase locally developed technology whenever possible.

Backers of sovereign tech also hope European companies can eventually compete internationally rather than only within domestic markets. French artificial intelligence company Mistral AI has reportedly experienced strong revenue growth as some businesses search for alternatives to OpenAI. Meanwhile, the governments of Canada and Germany are supporting cooperation between Cohere and Aleph Alpha to create what supporters describe as a transatlantic AI platform for governments and businesses.

As geopolitical tensions continue reshaping the global technology industry, some companies are discovering that not being American, Chinese, or Russian is itself becoming a commercial advantage in international markets.

France’s Break From Microsoft Signals Europe’s Growing Push for Digital Sovereignty


In a move that reflects Europe’s deepening concerns over data sovereignty and foreign technological dependence, France has decided to move its national Health Data Hub away from Microsoft's cloud infrastructure and into the hands of domestic provider Scaleway. The decision marks one of the most significant shifts yet in Europe’s growing effort to reclaim control over sensitive public data. 
 
The Health Data Hub contains medical information relating to millions of French citizens and serves as a major research platform for healthcare analysis and innovation. Since 2019, the system had been hosted on Microsoft Azure, a decision that triggered years of political and legal controversy due to fears surrounding American surveillance laws and extraterritorial access to European data.   
 
French authorities have now selected Scaleway, a subsidiary of Iliad, after an extensive evaluation involving more than 350 technical criteria related to security, resilience, and operational capacity. The migration is expected to be completed between late 2026 and early 2027.   
 

Why Europe Is Growing Wary of American Cloud Giants 

 
The decision is part of a much broader European movement toward what policymakers increasingly describe as “digital sovereignty.” Governments across Europe have become increasingly uneasy about relying on American technology firms for critical infrastructure, especially after repeated debates surrounding the US CLOUD Act, which can compel US companies to provide data to American authorities even if that data is stored overseas.  
 
In France, these concerns intensified after Microsoft reportedly acknowledged before a French Senate inquiry that it could not fully resist certain US government data requests involving French citizens. That revelation significantly strengthened calls for sovereign cloud infrastructure controlled entirely within European legal jurisdiction. The shift also aligns with France’s wider technological repositioning. Earlier this year, the country announced plans to reduce reliance on Microsoft products across government systems, replacing several US-based platforms with domestic or open source alternatives.   
 

A Defining Moment for Europe’s Tech Independence 

 
France’s decision extends beyond healthcare infrastructure as it clearly represents a symbolic turning point in Europe’s evolving relationship with Big Tech. 
 
For years, European nations depended heavily on American cloud providers because of their scale, maturity, and technological dominance. But growing geopolitical tensions, concerns around privacy, and the strategic importance of data have begun reshaping that equation. 
 
By transferring one of its most sensitive national databases to a domestic provider, France is effectively signalling that technological convenience can no longer outweigh sovereignty concerns. The move may now encourage other European governments to reassess where their own critical data resides. 
 
At its core, this is no longer simply a cloud migration story. It is a declaration that, in the age of AI and mass data infrastructure, control over information has become inseparable from national security itself.

Google Promotes ChromeOS Flex as Free Upgrade Option for Millions of Unsupported Windows 10 PCs

 





More than 500 million devices currently running Windows 10 are approaching a critical turning point, as many of them are not eligible for an upgrade to Windows 11 due to hardware limitations. This has raised growing concerns about long-term security risks once support deadlines pass. In response, Google is actively promoting an alternative, positioning its ChromeOS Flex platform as a free way to modernize aging systems.

Google states that older laptops and desktops can be converted into faster, more secure, and easier-to-manage devices by installing ChromeOS Flex. The system is cloud-based and designed to extend the usability of existing hardware without requiring users to purchase new machines. Although ChromeOS Flex has been available for some time, Google has now made adoption simpler by introducing a physical USB installation kit. Developed in partnership with Back Market, the kit allows users to install the operating system more easily. It is priced at approximately $3 or €3, is reusable, and is supported by recycling-focused efforts such as Closing the Loop to reduce electronic waste.

The timing of this push is closely linked to Microsoft’s decision to end mainstream support for Windows 10 in October 2025. That shift has forced users into a difficult position: invest in new hardware or continue using an operating system that will no longer receive full security updates. While Microsoft does offer an Extended Security Updates (ESU) program, it is only a temporary solution. For individual users, coverage extends for roughly one additional year, while enterprise customers may receive longer support under specific licensing agreements.

The transition to Windows 11 has also been slower than expected. Adoption challenges, largely driven by strict hardware requirements, have resulted in an unusually large number of users remaining on Windows 10 even after its official lifecycle milestone. This contrasts with Microsoft’s earlier expectations of a smoother migration similar to the shift from Windows 7 to Windows 10, which had seen broader and faster adoption.

Google is also emphasizing environmental considerations as part of its messaging. The company highlights that manufacturing a new laptop contributes significantly to its overall carbon footprint. By extending the lifespan of existing devices, ChromeOS Flex helps reduce landfill waste and avoids emissions associated with producing new hardware. Google further claims that ChromeOS-based systems consume around 19% less energy on average compared to similar platforms.

Despite this, switching away from Windows remains a debated decision. Many users rely on the Windows ecosystem for software compatibility, workflows, and familiarity. However, for devices that cannot support Windows 11, alternatives such as ChromeOS Flex present a practical workaround. Even in cases where users purchase new computers, older machines can still be repurposed using such operating systems, for example within households.

At the same time, Microsoft is continuing to strengthen its Windows 11 ecosystem. Devices already running Windows 11 are being automatically updated to newer versions to maintain consistent security coverage. The company is using artificial intelligence to determine when systems are ready for upgrades and applying updates accordingly. While a similar approach could theoretically be applied to Windows 10 devices that meet upgrade requirements, this has not yet been implemented. It remains uncertain whether this could change as future deadlines approach.

Recent developments have also drawn attention to user hesitation around Windows 11. Reports indicated that a recent update disrupted a key Start menu function, even as official communication suggested there were no outstanding issues. Subsequent updates and documentation now indicate that previously known bugs have been resolved, with Microsoft steadily addressing issues since the platform’s release in late 2024.

Additional reporting suggests that all known issues in the current Windows 11 version have been marked as resolved in official tracking systems. This reflects ongoing improvements, though it also underlines the complexity of maintaining stability across large-scale operating system deployments.

For enterprise users, Microsoft is extending support in more flexible ways. Certain legacy versions of Windows 10, including enterprise and IoT editions released in 2016, are eligible for additional security updates. These updates are delivered through ESU programs available via volume licensing or cloud solution providers. However, Microsoft continues to describe this as a temporary measure rather than a permanent extension.

For individual users, the situation is more restrictive. Extended Security Updates are limited in duration, and once they expire, devices will no longer receive security patches, bug fixes, or technical support. However, the continued availability of such programs suggests that support timelines may evolve depending on broader user adoption patterns.

The wider ecosystem is also seeing alternative recommendations. Some industry discussions encourage migration to Linux-based systems, while Google’s ChromeOS Flex represents a more consumer-friendly option. With hundreds of millions of devices affected, the coming months will play a crucial role in determining whether users remain within the Windows ecosystem or begin shifting toward alternative platforms.


Microsoft Releases AI Upgrades, Launches Copilot Cowork to Early Access Customers


In an effort to enhance its AI offering and increase adoption, Microsoft (MSFT.O) recently introduced new features in its Copilot research assistant that would enable users to employ various AI models concurrently within the same workflow.

Instead of relying on a single model, Copilot's Researcher agent can now pull outputs from both OpenAI's GPT and Anthropic's Claude models for each response, thanks to a new feature called "Critique."

According to Microsoft, Claude will check the quality and correctness of the response before GPT provides it to the user. In the future, the business hopes to make that workflow bidirectional so that GPT may also evaluate Claude's writings.

"Having different models from ​different vendors in Copilot is highly attractive - but we're taking this to the next level, where customers actually get the benefits of the models working together," Nicole Herskowitz, VP of Copilot and  Microsoft, said to Reuters. 

The multi-model strategy will assist in increasing productivity and quality for customers by accelerating user workflow, controlling AI hallucinations, which occur when systems give incorrect information, and producing more dependable outputs.

Additionally, Microsoft is introducing a feature called "Council" that will let users compare results from various AI models side by side. The updates coincide with Microsoft expanding access to its new Copilot Cowork agentic AI tool for members of its "Frontier" program, which gives users early access to some of its most recent AI innovations.

According to Jared Spataro, Microsoft's AI-at-Work efforts leader, “We work only in a cloud environment, and we work only on behalf of the user. So you know exactly what information it (Copilot Cowork) has access ​to.”

On Monday, the company's stock increased by almost 1%. However, as investor confidence in AI declines, the stock is poised for its worst quarter since the global financial crisis of 2008, with a nearly 25% decline.

Microsoft capitalized on the increasing demand for autonomous AI agents earlier this month by releasing Copilot Cowork, a solution based on Anthropic's popular Claude Cowork product, in testing mode.

In the face of fierce competition from rivals like Google (GOOGL.O), the new tab Gemini, and autonomous agents like Claude Cowork, the Windows manufacturer has been rushing to enhance its Copilot assistant to promote greater usage.

AI-Driven Phishing Campaign Exploits Railway to Breach Microsoft Cloud Accounts at Scale

 

Security experts at Huntress report a fast-changing phishing operation using AI tools and cloud systems to breach Microsoft accounts in hundreds of companies. This activity ties back to improper use of Railway, a service that helps people launch apps and websites swiftly. Running on automated workflows, the attack adapts quickly, slipping past common defenses. Instead of relying on old methods, it shifts tactics constantly, making detection harder. Through compromised credentials, access spreads quietly within corporate networks. Investigators found backend processes hosted remotely, fueling repeated login attempts. 

Unlike typical scams, this one uses synthetic voices and generated text to mimic real communication. Some messages appear personalized, increasing their chances of success. Early warnings came from irregular traffic patterns tied to authentication requests. Organizations affected span multiple industries without geographic concentration. Researchers stress monitoring unusual API behavior as a sign of intrusion. Detection now depends more on behavioral anomalies than known threat signatures. 

Starting in early 2026, the attack started quietly before rapidly growing in intensity. Come March, signs showed a sharp rise - dozens of groups breached each day. Though linked to an obscure group using few internet addresses, its impact spread fast. Hundreds of confirmed victims fell within weeks, likely many more worldwide.  

Something different here? The integration of AI to craft phishing bait. Typical assaults lean on reused message formats; by contrast, this one generates unique, tailored texts - some with QR symbols, others embedding shared-file URLs or fake alerts mimicking real platforms. Because each message looks unlike the last, standard filters struggle. Pattern-based defenses fail when there is no clear pattern to catch. 

Not every login attempt follows the usual path. Some intruders step in through a backdoor built for gadgets like printers or streaming boxes. A fake prompt appears, nudging users to approve what seems like a routine connection. Once granted, digital keys are handed out - no password cracking needed. With those credentials, unauthorized entry lasts nearly three months. Security checks such as two-step verification simply do not apply.  

Across sectors like finance, healthcare, and government, effects are widespread. Though Huntress says it stopped further attacks for some customers, the company notes its data probably captures just a small portion of those impacted. Huntress moved quickly, rolling out urgent fixes to about 60,000 Microsoft cloud customers after spotting risky traffic linked to Railway domains. Although unintended, misuse of the platform did occur - Railway admitted this, then paused harmful user profiles while cutting off connected web addresses. Security adjustments limited entry points before further harm could unfold. 

The way bad actors craft digital traps now involves artificial intelligence, running through vast online computing resources. With such technology at hand, launching widespread fake message attacks happens faster than before. Experts observing these shifts note a troubling trend: simpler methods achieving stronger results. What once required skill can now be managed by nearly anyone willing to try. Speed grows. Scale expands. Risk rises accordingly.

Security Alerts or Scams? How to Spot Fake Login Warnings and Protect Your Accounts

 

Your phone buzzes with a notification: “Unusual login activity detected on your account.” It’s enough to make anyone uneasy. But is it a genuine alert about a hacking attempt, or could the message itself be a trap?

Notifications from major platforms like Google, Microsoft, Amazon, or even your bank can be both helpful and risky. While they act as an early warning system against unauthorized access, cybercriminals often exploit this sense of urgency. Fake alerts are designed to trick users into clicking on malicious links and entering sensitive information on fraudulent login pages. Acting impulsively in such moments can unintentionally give attackers access to your accounts.

Understanding Security Alerts

Not every alert signals a compromised account. Many platforms rely on advanced monitoring systems that flag unusual behaviour before any real damage occurs.

These systems may detect:
  • Multiple failed login attempts from different locations
  • Automated attacks using leaked credentials
  • Logins from unfamiliar devices or IP addresses
In many cases, a blocked login attempt simply means the system is working as intended—not that your account has already been breached.

The 3-Second Test: Spotting Real vs Fake Messages

Before clicking on any alert, pause and verify. Even AI-generated phishing emails often fail basic checks:

1. The Sender Check
Always look beyond the display name. Verify the actual email address and domain. Fraudsters often use slight variations like “amazon-support.co.uk” or “service@paypal-hilfe.com
” to appear legitimate.

2. The Hover Trick
On a computer, hover your cursor over any link without clicking. The true destination URL will appear. If it doesn’t match the official website, delete the email immediately.

3. Watch for Panic Tactics
Be cautious of urgent messages such as:
“Act within 10 minutes or your account will be irrevocably deleted!”
Legitimate companies don’t pressure users this way—urgency is a common scam tactic.

Golden Rule: Never click directly from the email. Instead, open your browser, manually type the official website, and log in. If there’s a real issue, it will be visible in your account dashboard.

Using the same password across multiple platforms increases risk. A breach on one website can trigger a domino effect, allowing attackers to access other accounts using the same credentials

The Role of Password Managers

Password managers offer a simple yet powerful solution:

  1. Unique Passwords: They generate strong, complex passwords for each account, ensuring one breach doesn’t compromise everything.
  2. Built-in Phishing Protection: These tools only autofill credentials on legitimate websites, helping you avoid fake login pages.

Tools like Dashlane provide a comprehensive password management experience with seamless autofill and secure password generation. Meanwhile, Bitwarden stands out as a reliable open-source option with robust free features.

Security alerts aren’t always bad news, they often indicate that protective systems are doing their job. The real risk lies in reacting without verification.

By using a password manager and enabling two-factor authentication, you can significantly strengthen your defenses and keep your digital identity secure

“Unhackable” No More: Researcher Demonstrates Hardware-Level Exploit on Xbox One







For years, the Xbox One was widely viewed as one of the few gaming systems that had resisted successful hacking. That perception has now changed after a new hardware-based attack method was publicly demonstrated.

At the RE//verse 2026 event, security researcher Markus Gaasedelen introduced a technique called the “Bliss” double glitch. This method relies on manipulating electrical voltage at precise moments to interfere with the console’s startup process, effectively bypassing its built-in protections.

This marks the first known instance where the Xbox One’s hardware defenses have been broken in a way that others can replicate. The achievement is being compared to the Reset Glitch Hack that affected the Xbox 360, although this newer approach operates at a deeper level. Instead of targeting software vulnerabilities, it directly interferes with the boot ROM, a core component embedded in the console’s chip. By doing so, the exploit grants complete control over the system, including its most secure layers such as the hypervisor.

When the Xbox One was introduced in 2013, Microsoft designed it with an unusually strong security model. The system relied on multiple layers of encryption and authentication, linking firmware, the operating system, and game files into a tightly controlled verification chain. Within the company, it was even described as one of the most secure products Microsoft had ever built.

A substantial part of this design was its secure boot process. Unlike the Xbox 360, which was compromised through reset-line manipulation, the Xbox One removed such external entry points. It also incorporated a dedicated ARM-based security processor responsible for verifying every stage of the startup sequence. Without valid cryptographic signatures, no code was allowed to run. For many years, this approach appeared highly effective.

Rather than attacking these higher-level protections, the researcher focused on the physical behavior of the hardware itself. Traditional glitching techniques rely on disrupting timing signals, but the Xbox One’s architecture left little opportunity for that. Instead, the method used here involves voltage glitching, where the power supplied to the processor is briefly disrupted.

These momentary drops in voltage can cause the processor to behave unpredictably, such as skipping instructions or misreading operations. However, the timing must be extremely precise, as even a tiny variation can result in failure or system crashes.

To achieve this level of accuracy, specialized hardware tools were developed to monitor and control electrical signals within the system. This allowed the researcher to closely observe how the console behaves at the silicon level and identify the exact points where interference would be effective.

The resulting “Bliss” technique uses two carefully timed voltage disruptions during the startup process. The first interferes with memory protection mechanisms managed by the ARM Cortex subsystem. The second targets a memory-copy operation that occurs while the system is loading initial data. If both steps are executed correctly, the system is redirected to run code chosen by the attacker, effectively taking control of the boot process.

Unlike many modern exploits, this method does not depend on software flaws that can be corrected through updates. Instead, it targets the boot ROM, which is permanently embedded in the chip during manufacturing. Because this code cannot be modified, the vulnerability cannot be patched. As a result, the exploit allows unauthorized code execution across all system layers, including protected components.

With this level of access, it becomes possible to run alternative operating systems, extract encrypted firmware, and analyze internal system data. This has implications for both security research and digital preservation, as it enables deeper understanding of the console’s architecture and may support efforts to emulate its environment in the future.

Beyond research applications, the findings may also lead to practical tools. There is speculation that the technique could be adapted into hardware modifications similar to modchips, which automate the precise electrical conditions needed for the exploit. Such developments could revive longstanding debates around console modification and software control.

From a security perspective, the immediate impact on Microsoft may be limited, as the Xbox One is no longer the company’s latest platform. Newer systems have adopted updated security designs based on similar principles. However, the discovery serves a lesson for the industry: no system can be considered permanently secure, especially when attacks target the underlying hardware itself.

North Korean Hackers Turn VS Code Projects Into Silent Malware Triggers

 


Opening a project in a code editor is supposed to be routine. In this case, it is enough to trigger a full malware infection.

Security researchers have linked an ongoing campaign associated with North Korean actors, tracked as Contagious Interview or WaterPlum, to a malware family known as StoatWaffle. Instead of relying on software vulnerabilities, the group is embedding malicious logic directly into Microsoft Visual Studio Code (VS Code) projects, turning a trusted development tool into the starting point of an attack.

The entire mechanism is hidden inside a file developers rarely question: tasks.json. This file is typically used to automate workflows. In these attacks, it has been configured with a setting that forces execution the moment a project folder is opened. No manual action is required beyond opening the workspace.

Research from NTT Security shows that the embedded task connects to an external web application, previously hosted on Vercel, to retrieve additional data. The same task operates consistently regardless of the operating system, meaning the behavior does not change between environments even though most observed cases involve Windows systems.

Once triggered, the malware checks whether Node.js is installed. If it is not present, it downloads and installs it from official sources. This ensures the system can execute the rest of the attack chain without interruption.

What follows is a staged infection process. A downloader repeatedly contacts a remote server to fetch additional payloads. Each stage behaves in the same way, reaching out to new endpoints and executing the returned code as Node.js scripts. This creates a recursive chain where one payload continuously pulls in the next.

StoatWaffle is built as a modular framework. One component is designed for data theft, extracting saved credentials and browser extension data from Chromium-based browsers and Mozilla Firefox. On macOS systems, it also targets the iCloud Keychain database. The collected information is then sent to a command-and-control server.

A second module functions as a remote access trojan, allowing attackers to operate the infected system. It supports commands to navigate directories, list and search files, execute scripts, upload data, run shell commands, and terminate itself when required.

Researchers note that the malware is not static. The operators are actively refining it, introducing new variants and updating existing functionality.

The VS Code-based delivery method is only one part of a broader campaign aimed at developers and the open-source ecosystem. In one instance, attackers distributed malicious npm packages carrying a Python-based backdoor called PylangGhost, marking its first known propagation through npm.

Another campaign, known as PolinRider, involved injecting obfuscated JavaScript into hundreds of public GitHub repositories. That code ultimately led to the deployment of an updated version of BeaverTail, a malware strain already linked to the same threat activity.

A more targeted compromise affected four repositories within the Neutralinojs GitHub organization. Attackers gained access by hijacking a contributor account with elevated permissions and force-pushed malicious code. This code retrieved encrypted payloads hidden within blockchain transactions across networks such as Tron, Aptos, and Binance Smart Chain, which were then used to download and execute BeaverTail. Victims are believed to have been exposed through malicious VS Code extensions or compromised npm packages.

According to analysis from Microsoft, the initial compromise often begins with social engineering rather than technical exploitation. Attackers stage convincing recruitment processes that closely resemble legitimate technical interviews. Targets are instructed to run code hosted on platforms such as GitHub, GitLab, or Bitbucket, unknowingly executing malicious components as part of the assessment.

The individuals targeted are typically experienced professionals, including founders, CTOs, and senior engineers in cryptocurrency and Web3 sectors. Their level of access to infrastructure and digital assets makes them especially valuable. In one recent case, attackers unsuccessfully attempted to compromise the founder of AllSecure.io using this approach.

Multiple malware families are used across these attack chains, including OtterCookie, InvisibleFerret, and FlexibleFerret. InvisibleFerret is commonly delivered through BeaverTail, although recent intrusions show it being deployed after initial access is established through OtterCookie. FlexibleFerret, also known as WeaselStore, exists in both Go and Python variants, referred to as GolangGhost and PylangGhost.

The attackers continue to adjust their techniques. Newer versions of the malicious VS Code projects have moved away from earlier infrastructure and now rely on scripts hosted on GitHub Gist to retrieve additional payloads. These ultimately lead to the deployment of FlexibleFerret. The infected projects themselves are distributed through GitHub repositories.

Security analysts warn that placing malware inside tools developers already trust significantly lowers suspicion. When the code is presented as part of a hiring task or technical assessment, it is more likely to be executed, especially under time pressure.

Microsoft has responded to the misuse of VS Code tasks with security updates. In the January 2026 release (version 1.109), a new setting disables automatic task execution by default, preventing tasks defined in tasks.json from running without user awareness. This setting cannot be overridden at the workspace level, limiting the ability of malicious repositories to bypass protections.

Additional safeguards were introduced in February 2026 (version 1.110), including a second prompt that alerts users when an auto-run task is detected after workspace trust is granted.

Beyond development environments, North Korean-linked operations have expanded into broader social engineering campaigns targeting cryptocurrency professionals. These include outreach through LinkedIn, impersonation of venture capital firms, and fake video conferencing links. Some attacks lead to deceptive CAPTCHA pages that trick victims into executing hidden commands in their terminal, enabling cross-platform infections on macOS and Windows. These activities overlap with clusters tracked as GhostCall and UNC1069.

Separately, the U.S. Department of Justice has taken action against individuals involved in supporting North Korea’s fraudulent IT worker operations. Audricus Phagnasay, Jason Salazar, and Alexander Paul Travis were sentenced after pleading guilty in November 2025. Two received probation and fines, while one was sentenced to prison and ordered to forfeit more than $193,000 obtained through identity misuse.

Officials stated that such schemes enable North Korean operatives to generate revenue, access corporate systems, steal proprietary data, and support broader cyber operations. Separate research from Flare and IBM X-Force indicates that individuals involved in these programs undergo rigorous training and are considered highly skilled, forming a key part of the country’s strategic cyber efforts.


What this means

This attack does not depend on exploiting a flaw in software. It depends on exploiting trust.

By embedding malicious behavior into tools, workflows, and hiring processes that developers rely on every day, attackers are shifting the point of compromise. In this environment, opening a project can be just as risky as running an unknown program.

Microsoft Alerts 29,000 Users Hit by IRS-Themed Phishing Wave

 

Microsoft is warning of a major IRS‑themed phishing wave that hit 29,000 users in a single day, using tax‑season panic to steal credentials and deploy remote access malware. The campaigns piggyback on the urgency of the U.S. tax season, sending emails that pretend to be refund notices, payroll forms, filing reminders, or messages from tax professionals to pressure recipients into acting quickly.

According to Microsoft Threat Intelligence and Defender researchers, some lures target regular taxpayers for financial data, while others focus on accountants and professionals who routinely handle sensitive tax documents and are used to receiving legitimate tax‑related mail.Many of these messages direct users either to phishing pages built on Phishing‑as‑a‑Service platforms like the Energy365 kit or to downloads that silently install remote monitoring and management (RMM) tools. 

In one large campaign unearthed on February 10, 2026, more than 29,000 users across 10,000 organizations were targeted in just a day, with about 95% of victims located in the U.S. The emails impersonated the Internal Revenue Service and claimed that irregular tax returns had been filed under the recipient’s Electronic Filing Identification Number, pushing them to urgently review those returns. Sectors hit hardest included financial services, technology and software, and retail and consumer goods, reflecting the high value of the data and access that successful compromises could deliver to attackers. 

Victims were instructed to download a supposed “IRS Transcript Viewer” via a button labeled “Download IRS Transcript View 5.1,” which actually redirected to smartvault[.]im, a domain posing as legitimate document platform SmartVault. The site used Cloudflare protections so that automated scanners saw a benign front, while real users received a maliciously packaged ScreenConnect installer that gave attackers remote access to their systems. Once installed, this RMM tooling enabled data theft, credential harvesting, and further post‑exploitation such as lateral movement or deploying additional malware. 

Microsoft also highlights related tax‑themed tactics: CPA‑style lures tied to the Energy365 phishing kit, bogus tax‑themed domains that push ScreenConnect, and cryptocurrency‑tax emails that impersonate the IRS and distribute ScreenConnect or SimpleHelp via malicious domains like “irs-doc[.]com” and “gov-irs216[.]net.” In some cases, attackers emailed accountants and organizations asking for help filing taxes, then funneled them to Datto RMM installers under the guise of sharing documentation. Collectively, these methods show a trend of abusing legitimate RMM platforms for stealthy, persistent access instead of relying solely on traditional malware. 

To defend against these threats, Microsoft advises organizations to enforce two‑factor authentication on all accounts, implement conditional access policies, and harden email security to better scan attachments, links, and visited websites. They also recommend blocking access to known malicious domains, monitoring networks and endpoints for unauthorized RMM tools like ScreenConnect, Datto, and SimpleHelp, and educating users—especially finance and tax staff—on spotting urgent, tax‑themed emails that request downloads or credentials.

Microsoft Releases Hotpatch to Fix Windows 11 RRAS Remote Code Flaw



Microsoft has issued an out-of-band (OOB) security update to remediate critical vulnerabilities affecting a specific subset of Windows 11 Enterprise systems that rely on hotpatch updates instead of the conventional monthly Patch Tuesday cumulative updates.

The update, identified as KB5084597, was released to fix multiple security flaws in the Windows Routing and Remote Access Service (RRAS), a built-in administrative tool used for configuring and managing remote connectivity and routing functions within enterprise networks. According to Microsoft’s official advisory, these vulnerabilities could allow remote code execution if a system connects to a malicious or attacker-controlled server through the RRAS management interface.

Microsoft clarified that the risk is limited to narrowly defined scenarios. The exposure primarily impacts Enterprise client devices that are enrolled in the hotpatch update model and are actively used for remote server management. This means that the vulnerability does not broadly affect all Windows users, but rather a specific operational environment where administrative tools interact with external systems.

The vulnerabilities addressed in this update are tracked under three identifiers: CVE-2026-25172, CVE-2026-25173, and CVE-2026-26111. These issues were initially resolved as part of Microsoft’s March 2026 Patch Tuesday updates, which were released on March 10. However, the original fixes required system reboots to be fully applied.

Microsoft’s technical description indicates that successful exploitation would require an attacker to already possess authenticated access within a domain. The attacker could then use social engineering techniques to trick a domain-joined user into initiating a connection request to a malicious server via the RRAS snap-in management tool. Once the connection is made, the vulnerability could be triggered, allowing the attacker to execute arbitrary code on the targeted system.

The KB5084597 hotpatch is cumulative in nature, meaning it incorporates all previously released fixes and improvements included in the March 2026 security update package. This ensures that systems receiving the hotpatch are brought up to the same security level as those that installed the full cumulative update.

A key reason for releasing this hotpatch separately is the operational challenge associated with system restarts. Many enterprise environments run mission-critical workloads where even brief downtime can disrupt services, impact business continuity, or affect essential infrastructure. Traditional cumulative updates require a reboot, making them less practical in such contexts.

Hotpatching addresses this challenge by applying security fixes directly into the memory of running processes. This allows vulnerabilities to be mitigated immediately without interrupting system operations. Simultaneously, the update also modifies the relevant files stored on disk so that the fixes remain effective after the next scheduled reboot, maintaining long-term system integrity.

Microsoft also noted that while fixes for these vulnerabilities had been released earlier, the hotpatch update was reissued to ensure more comprehensive protection across all affected deployment scenarios. This suggests that the company identified gaps in earlier coverage or aimed to standardize protection for systems using different update mechanisms.

It is important to note that this hotpatch is not distributed to all devices. It is only available to systems that are enrolled in Microsoft’s hotpatch update program and are managed through Windows Autopatch, a cloud-based service that automates update deployment for enterprise environments. Eligible systems will receive and apply the update automatically, without requiring user intervention or a system restart.

From a broader security standpoint, this development surfaces the increasing complexity of patch management in modern enterprise environments. As organizations adopt high-availability systems that must remain continuously operational, traditional update strategies are evolving to include alternatives such as hotpatching.

At the same time, vulnerabilities in administrative tools like RRAS demonstrate how trusted system components can become entry points for attackers when combined with social engineering and authenticated access. Even though exploitation requires specific conditions, the potential impact remains substantial due to the elevated privileges typically associated with administrative tools.

Security experts generally emphasize that organizations must go beyond simply applying patches. Continuous monitoring, strict access control policies, and user awareness training are essential to reducing the likelihood of such attack scenarios. Additionally, maintaining visibility into how administrative tools are used within a network can help detect unusual behavior before it leads to compromise.

Overall, Microsoft’s release of this hotpatch reflects both the urgency of addressing critical vulnerabilities and the need to adapt security practices to environments where uptime is as important as protection.

Hackers Abuse OAuth Flaws for Microsoft Malware Delivery

 

Microsoft has warned that hackers are weaponizing OAuth error flows to redirect users from trusted Microsoft login pages to malicious sites that deliver malware. The campaigns, observed by Microsoft Defender researchers, primarily target government and public-sector organizations using phishing emails that appear to be legitimate Microsoft notifications or service messages. By abusing how OAuth 2.0 handles authorization errors and redirects, attackers are able to bypass many email and browser phishing protections that normally block suspicious URLs. This turns a standards-compliant identity feature into a powerful tool for malware distribution and account compromise. 

The attack begins with threat actors registering malicious OAuth applications in a tenant they control and configuring them with redirect URIs that point to attacker infrastructure. Victims receive phishing links that invoke Microsoft Entra ID authorization endpoints, which visually resemble legitimate sign-in flows, increasing user trust. The attackers craft these URLs with parameters for silent authentication and intentionally invalid scopes, which trigger an OAuth error instead of a normal sign-in. Rather than breaking the flow, this error causes the identity provider to follow the standard and redirect the user to the attacker-controlled redirect URI. 

Once redirected, victims may land on advanced phishing pages powered by attacker-in-the-middle frameworks such as EvilProxy, allowing threat actors to harvest valid session cookies and bypass multi-factor authentication. Microsoft notes that the attackers misuse the OAuth “state” parameter to automatically pre-fill the victim’s email address on the phishing page, making it look more authentic and reducing friction for the user. In other cases, the redirect leads to a “/download” path that automatically serves a ZIP archive containing malicious shortcut (LNK) files and HTML smuggling components. These variations show how the same redirection trick can support both credential theft and direct malware delivery. 

If a victim opens the malicious LNK file, it launches PowerShell to perform reconnaissance on the compromised host and stage the next phase of the attack. The script extracts components needed for DLL side-loading, where a legitimate executable is abused to load a malicious library. In this campaign, a rogue DLL named crashhandler.dll decrypts and loads the final payload crashlog.dat directly into memory, while a benign-looking binary (stream_monitor.exe) displays a decoy application to distract the user. This technique helps attackers evade traditional antivirus tools and maintain stealthy, in-memory persistence. 

Microsoft stresses that these are identity-based threats that exploit intended behaviors in the OAuth specification rather than exploiting a software vulnerability. The company recommends tightening permissions for OAuth applications, enforcing strong identity protections and Conditional Access policies, and applying cross-domain detection that correlates email, identity, and endpoint signals. Organizations should also closely monitor application registrations and unusual OAuth consent flows to spot malicious apps early. As this abuse of standards-compliant error handling is now active in real-world campaigns, defenders must treat OAuth flows themselves as a critical attack surface, not just a background authentication detail.

Windows Telemetry Explained: What Diagnostic Data Microsoft Collects and Why It Matters

 

Years after Windows 10 arrived, a single aspect keeps stirring conversation - telemetry. This data gathering, labeled diagnostic info by Microsoft, pulls details from machines without manual input. Its purpose? Keeping systems stable, secure, running smoothly. Yet reactions split sharply between everyday users and those watching privacy trends. 

Early on, after Windows 10 arrived, observers questioned whether its telemetry might double as monitoring. A few writers argued it collected large amounts of user detail while transmitting data to Microsoft machines. Still, analysts inspecting how the OS handles information report minimal proof backing such suspicions. 

Beginning in 2017, scrutiny from the Dutch Data Protection Authority revealed shortcomings in how Windows presented telemetry consent choices. Although designed to gather system performance details, the setup failed to align with regional privacy expectations due to unclear user permissions. 

Instead of defending the original design, Microsoft adjusted both interface wording and backend configurations. Following these updates, oversight bodies acknowledged improvements, noting no evidence emerged suggesting private information was gathered unlawfully. Independent analysts alongside regulatory teams had previously flagged the configuration, yet after revisions, compliance concerns faded gradually. 

What runs behind the scenes in Windows includes a mix of telemetry types - mainly split into essential and extra reporting layers. Most personal computers, especially those outside corporate control, turn on the basic tier automatically; there exists no standard menu option to switch it off entirely. This baseline layer gathers only what Microsoft claims is vital for stability and core operations. 

Though hidden from typical adjustments, its presence supports ongoing performance checks across devices. Basic troubleshooting relies on specific diagnostics tied to functions like Windows Update. Information might cover simple fault summaries, setup traits of hardware, software plus driver footprints, along with records tracking how updates succeed or fail. 

As noted by Microsoft, insights drawn support better stability fixes, safety patches, app alignment, and smoother running systems. Some diagnostic details go beyond basics, capturing patterns in app use or web habits. These insights might involve deeper system errors, performance signs, or hardware traits. 

While such data helps refine functionality, access remains under user control via Windows options. Those cautious about personal information often choose to turn this off. Control sits within settings, letting choices match comfort levels. Occasionally, memory dumps taken during system failures form part of Optional diagnostic data, according to experts. 

When a crash happens, pieces of active files might get saved inside these records. Because of this risk, certain groups managing confidential material prefer disabling the setting altogether. In 2018, Microsoft rolled out a feature named Diagnostic Data Viewer to boost openness. This tool gives people access to review what information their machine shares with the company, revealing specifics found in diagnostics and system summaries. 

One billion devices now operate on Windows 11 across the globe. Because of countless variations in hardware and software setups, Microsoft relies on telemetry data - this information reveals issues, shapes update improvements, yet supports consistent performance. While tracking user interactions might sound intrusive, it actually guides fixes without exposing personal details; instead, patterns emerge that steer engineering decisions behind the scenes. 

Even though some diagnostic details are essential for basic operations, those worried about personal data might choose to limit what gets sent by turning off non-essential diagnostics in device preferences. Still, full function depends on keeping certain reporting active.

New Copilot Setting May Access Activity From Other Microsoft Services. Here’s How Users Can Disable It

 



A recently noticed configuration inside Microsoft Copilot may allow the AI tool to reference activity from several other Microsoft platforms, prompting renewed discussion around data privacy and AI personalization. The option, which appears within Copilot’s settings, enables the assistant to use information connected to services such as Bing, MSN, and the Microsoft Edge browser. Users who are uncomfortable with this level of integration can switch the feature off.

Like many modern artificial intelligence systems, Copilot attempts to improve the usefulness of its responses by understanding more about the person interacting with it. The assistant normally does this by remembering past conversations and storing certain details that users intentionally share during chats. These stored elements help the AI maintain context across multiple interactions and generate responses that feel more tailored.

However, a specific configuration called “Microsoft usage data” expands that capability. According to reporting first highlighted by the technology outlet Windows Latest, this setting allows Copilot to reference information associated with other Microsoft services a user has interacted with. The option appears within the assistant’s Memory controls and is available through both the Copilot website and its mobile applications. Observers believe the setting was introduced recently as part of Microsoft’s effort to strengthen personalization features in its AI tools.

The Memory feature in Copilot is designed to help the assistant retain useful context. Through this system, the AI can recall earlier conversations, remember instructions or factual information shared by users, and potentially reference certain account-linked activity from other Microsoft products. The idea is that by understanding more about a user’s interests or previous discussions, the assistant can provide more relevant answers.

In practice, such capabilities can be helpful. For instance, a user who discussed a topic with Copilot previously may want to continue that conversation later without repeating the entire background. Similarly, individuals seeking guidance about personal or professional matters may receive more relevant suggestions if the assistant has some awareness of their preferences or circumstances.

Despite the convenience, the feature also raises questions about privacy. Some users may be concerned that allowing an AI assistant to accumulate information from multiple services could expose more personal data than expected. Others may want to know how that information is used beyond personalizing conversations.

Microsoft addresses these concerns in its official Copilot documentation. In its frequently asked questions section, the company states that user conversations are processed only for limited purposes described in its privacy policies. According to Microsoft, this information may be used to evaluate Copilot’s performance, troubleshoot operational issues, identify software bugs, prevent misuse of the service, and improve the overall quality of the product.

The company also says that conversations are not used to train AI models by default. Model training is controlled through a separate configuration, which users can choose to disable if they do not want their interactions contributing to AI development.

Microsoft further clarifies that Copilot’s personalization settings do not determine whether a user receives targeted advertisements. Advertising preferences are managed through a different option available in the Microsoft account privacy dashboard. Users who want to stop personalized advertising must adjust the Personalized ads and offers setting separately.

Even with these explanations, privacy concerns remain understandable, particularly because Microsoft documentation indicates that Copilot’s personalization features may already be activated automatically in some cases. When reviewing the settings on a personal device, these options were found to be switched on. Users who prefer not to allow Copilot to access broader usage data may therefore wish to disable them.

Checking these settings is straightforward. Users can open Copilot through its website or mobile application and ensure they are signed in with their Microsoft account. On the web interface, selecting the account name at the bottom of the left-hand panel opens the Settings menu, where the Memory section can be accessed. In the mobile application, the same controls are available through the side navigation menu by tapping the account name and choosing Memory.

Inside the Memory settings, users will see a general control labeled “Personalization and memory.” Two additional options appear beneath it: “Facts you’ve shared,” which stores information provided directly during conversations, and “Microsoft usage data,” which allows Copilot to reference activity from other Microsoft services.

To limit this behavior, users can switch off the Microsoft usage data toggle. They may also disable the broader Personalization and memory option if they prefer that the AI assistant does not retain contextual information about their interactions. Copilot also provides a “Delete all memory” function that removes all stored data from the system. If individual personal details have been recorded, they can be reviewed and deleted through the editing option next to “Facts you’ve shared.”

Security and privacy experts generally advise caution when sharing information with AI assistants, even when personalization features remain enabled. Sensitive or confidential details should not be entered into conversations. Microsoft itself recommends avoiding the disclosure of certain types of highly personal data, including information related to health conditions or sexual orientation.

The broader development reflects a growing trend in the technology industry. As AI assistants become integrated across multiple platforms and services, companies are increasingly using cross-service data to make these tools more helpful and personalized. While this approach can improve convenience and usability, it also underlines the grave necessity for transparent privacy controls so users remain aware of how their information is being used and can adjust those settings when necessary.

Microsoft Report Reveals Hackers Exploit AI In Cyberattacks


According to Microsoft, hackers are increasingly using AI in their work to increase attacks, scale cyberattack activity, and limit technical barriers throughout all aspects of a cyberattack. 

Microsoft’s new Threat Intelligence report reveals that threat actors are using genAI tools for various tasks, such as phishing, surveillance, malware building, infrastructure development, and post-hack activity. 

About the report

In various incidents, AI helps to create phishing emails, summarize stolen information, debug malware, translate content, and configure infrastructure. “Microsoft Threat Intelligence has observed that most malicious use of AI today centers on using language models for producing text, code, or media. Threat actors use generative AI to draft phishing lures, translate content, summarize stolen data, generate or debug malware, and scaffold scripts or infrastructure,” the report said. 

"For these uses, AI functions as a force multiplier that reduces technical friction and accelerates execution, while human operators retain control over objectives, targeting, and deployment decisions,’ warns Microsoft.

AI in cyberattacks 

Microsoft found different hacking gangs using AI in their cyberattacks, such as North Korean hackers known as Coral Sleet (Storm-1877) and Jasper Sleet (Storm-0287), who use the AI in their remote IT worker scams. 

The AI helps to make realistic identities, communications, and resumes to get a job in Western companies and have access once hired. Microsoft also explained how AI is being exploited in malware development and infrastructure creation. Threat actors are using AI coding tools to create and refine malicious code, fix errors, and send malware components to different programming languages. 

The impact

A few malware experiments showed traces of AI-enabled malware that create scripts or configure behaviour at runtime. Microsoft found Coral Sleet using AI to make fake company sites, manage infrastructure, and troubleshoot their installations. 

When security analysts try to stop the use of AI in these attacks, Microsoft says hackers are using jailbreaking techniques to trick AI into creating malicious code or content. 

Besides generative AI use, the report revealed that hackers experiment with agentic AI to do tasks autonomously. The AI is mainly used for decision-making currently. As IT worker campaigns depend on the exploitation of authentic access, experts have advised organizations to address these attacks as insider risks. 

Microsoft Copilot Bug Exposes Confidential Outlook Emails

 
























A critical bug in Microsoft 365 Copilot, tracked as CW1226324, allowed the AI assistant to access and summarize confidential emails in Outlook's Sent Items and Drafts folders, bypassing sensitivity labels and Data Loss Prevention (DLP) policies. Microsoft first detected the issue on January 21, 2026, with exposure lasting from late January until early to mid-February 2026. This flaw affected enterprise users worldwide, including organizations like the UK's NHS, despite protections meant to block AI from processing sensitive data.

 The vulnerability stemmed from a code error that ignored confidentiality labels on user-authored emails stored in desktop Outlook.When users queried Copilot Chat, it retrieved and summarized content from these folders, potentially including business contracts, legal documents, police investigations, and health records. Importantly, the bug did not grant unauthorized access; summaries only appeared to users already permitted to view the mailbox. However, feeding such data into a large language model raised fears of unintended processing or training data incorporation.

Microsoft swiftly responded by deploying a global configuration update in early February 2026, restoring proper exclusion of protected content from Copilot. The company continues monitoring rollout and contacting affected customers for verification, though no full remediation timeline or user impact numbers have been disclosed.As of late February, the patch was in place for most enterprise accounts, tagged as a limited-scope advisory.

This incident underscores persistent AI privacy risks in enterprise tools, marking the second Copilot-related email exposure in eight months—the prior EchoLeak involved prompt injection attacks. It highlights how even brief bugs can erode trust in AI assistants handling confidential workflows. Security experts urge organizations to audit DLP configurations and monitor AI behaviors closely.

For Microsoft 365 users, especially in high-stakes sectors like healthcare and finance, the event emphasizes the need for robust sensitivity labeling and regular Copilot audits. While fixed, expanded DLP enforcement across storage locations won't complete until late April 2026. Businesses should prioritize data governance to mitigate future AI flaws, ensuring productivity doesn't compromise security.

Microsoft AI Chief: 18 Months to Automate White-Collar Jobs

 

Mustafa Suleyman, CEO of Microsoft AI, has issued a stark warning about the future of white-collar work. In a recent Financial Times interview, he predicted that AI will achieve human-level performance on most professional tasks within 18 months, automating jobs involving computer-based work like accounting, legal analysis, marketing, and project management. This timeline echoes concerns from AI leaders, comparing the shift to the pre-pandemic moment in early 2020 but far more disruptive. Suleyman attributes this to exponential growth in computational power, enabling AI to outperform humans in coding and beyond.

Suleyman's forecast revives 2025 predictions from tech executives. Anthropic's Dario Amodei warned AI could eliminate half of entry-level white-collar jobs, while Ford's Jim Farley foresaw a 50% cut in U.S. white-collar roles. Elon Musk recently suggested artificial general intelligence—AI surpassing human intelligence—could arrive this year. These alarms contrast with CEO silence earlier, likened by The Atlantic to ignoring a shark fin in the water. The drumbeat of disruption is growing louder amid rapid AI advances.

Current AI impact on offices remains limited despite hype. A 2025 Thomson Reuters report shows lawyers and accountants using AI for tasks like document review, yielding only marginal productivity gains without mass displacement. Some studies even indicate setbacks: a METR analysis found AI slowed software developers by 20%. Economic benefits are mostly in Big Tech, with profit margins up over 20% in Q4 2025, while broader indices like the Bloomberg 500 show no change.

Early job losses signal brewing changes. Challenger, Gray & Christmas reported 55,000 AI-related cuts in 2025, including Microsoft's 15,000 layoffs as CEO Satya Nadella pushed to "reimagine" for the AI era. Markets reacted sharply last week with a "SaaSpocalypse" selloff in software stocks after Anthropic and OpenAI launched agentic AI systems mimicking SaaS functions. Investors doubt AI will boost non-tech earnings, per Wall Street consensus.

Suleyman envisions customizable AI transforming every organization. He predicts users will design models like podcasts or blogs, tailored for any job, driving his push for Microsoft "superintelligence" and independent foundation models. As the "most important technology of our time," Suleyman aims to reduce reliance on partners like OpenAI. This could redefine the American Dream, once fueled by MBAs and law degrees, urging urgent preparation for AI's white-collar reckoning.

GitHub Fixes AI Flaw That Could Have Exposed Private Repository Tokens

 



A now-patched security weakness in GitHub Codespaces revealed how artificial intelligence tools embedded in developer environments can be manipulated to expose sensitive credentials. The issue, discovered by cloud security firm Orca Security and named RoguePilot, involved GitHub Copilot, the AI coding assistant integrated into Codespaces. The flaw was responsibly disclosed and later fixed by Microsoft, which owns GitHub.

According to researchers, the attack could begin with a malicious GitHub issue. An attacker could insert concealed instructions within the issue description, specifically crafted to influence Copilot rather than a human reader. When a developer launched a Codespace directly from that issue, Copilot automatically processed the issue text as contextual input. This created an opportunity for hidden instructions to silently control the AI agent operating within the development environment.

Security experts classify this method as indirect or passive prompt injection. In such attacks, harmful instructions are embedded inside content that a large language model later interprets. Because the model treats that content as legitimate context, it may generate unintended responses or perform actions aligned with the attacker’s objective.

Researchers also described RoguePilot as a form of AI-mediated supply chain attack. Instead of exploiting external software libraries, the attacker leverages the AI system integrated into the workflow. GitHub allows Codespaces to be launched from repositories, commits, pull requests, templates, and issues. The exposure occurred specifically when a Codespace was opened from an issue, since Copilot automatically received the issue description as part of its prompt.

The manipulation could be hidden using HTML comment tags, which are invisible in rendered content but still readable by automated systems. Within those hidden segments, an attacker could instruct Copilot to extract the repository’s GITHUB_TOKEN, a credential that provides elevated permissions. In one demonstrated scenario, Copilot could be influenced to check out a specially prepared pull request containing a symbolic link to an internal file. Through techniques such as referencing a remote JSON schema, the AI assistant could read that internal file and transmit the privileged token to an external server.

The RoguePilot disclosure comes amid broader concerns about AI model alignment. Separate research from Microsoft examined a reinforcement learning method called Group Relative Policy Optimization, or GRPO. While typically used to fine-tune large language models after deployment, researchers found it could also weaken safety safeguards, a process they labeled GRP-Obliteration. Notably, training on even a single mildly problematic prompt was enough to make multiple language models more permissive across harmful categories they had never explicitly encountered.

Additional findings stress upon side-channel risks tied to speculative decoding, an optimization technique that allows models to generate multiple candidate tokens simultaneously to improve speed. Researchers found this process could potentially reveal conversation topics or identify user queries with significant accuracy.

Further concerns were raised by AI security firm HiddenLayer, which documented a technique called ShadowLogic. When applied to agent-based systems, the concept evolves into Agentic ShadowLogic. This approach involves embedding backdoors at the computational graph level of a model, enabling silent modification of tool calls. An attacker could intercept and reroute requests through infrastructure under their control, monitor internal endpoints, and log data flows without disrupting normal user experience.

Meanwhile, Neural Trust demonstrated an image-based jailbreak method known as Semantic Chaining. This attack exploits limited reasoning depth in image-generation models by guiding them through a sequence of individually harmless edits that gradually produce restricted or offensive content. Because each step appears safe in isolation, safety systems may fail to detect the evolving harmful intent.

Researchers have also introduced the term Promptware to describe a new category of malicious inputs designed to function like malware. Instead of exploiting traditional code vulnerabilities, promptware manipulates large language models during inference to carry out stages of a cyberattack lifecycle, including reconnaissance, privilege escalation, persistence, command-and-control communication, lateral movement, and data exfiltration.

Collectively, these findings demonstrate that AI systems embedded in development platforms are becoming a new attack surface. As organizations increasingly rely on intelligent automation, safeguarding the interaction between user input, AI interpretation, and system permissions is critical to preventing misuse within trusted workflows.