Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Showing posts with label Microsoft. Show all posts

Microsoft Copilot Bug Exposes Confidential Outlook Emails

 
























A critical bug in Microsoft 365 Copilot, tracked as CW1226324, allowed the AI assistant to access and summarize confidential emails in Outlook's Sent Items and Drafts folders, bypassing sensitivity labels and Data Loss Prevention (DLP) policies. Microsoft first detected the issue on January 21, 2026, with exposure lasting from late January until early to mid-February 2026. This flaw affected enterprise users worldwide, including organizations like the UK's NHS, despite protections meant to block AI from processing sensitive data.

 The vulnerability stemmed from a code error that ignored confidentiality labels on user-authored emails stored in desktop Outlook.When users queried Copilot Chat, it retrieved and summarized content from these folders, potentially including business contracts, legal documents, police investigations, and health records. Importantly, the bug did not grant unauthorized access; summaries only appeared to users already permitted to view the mailbox. However, feeding such data into a large language model raised fears of unintended processing or training data incorporation.

Microsoft swiftly responded by deploying a global configuration update in early February 2026, restoring proper exclusion of protected content from Copilot. The company continues monitoring rollout and contacting affected customers for verification, though no full remediation timeline or user impact numbers have been disclosed.As of late February, the patch was in place for most enterprise accounts, tagged as a limited-scope advisory.

This incident underscores persistent AI privacy risks in enterprise tools, marking the second Copilot-related email exposure in eight months—the prior EchoLeak involved prompt injection attacks. It highlights how even brief bugs can erode trust in AI assistants handling confidential workflows. Security experts urge organizations to audit DLP configurations and monitor AI behaviors closely.

For Microsoft 365 users, especially in high-stakes sectors like healthcare and finance, the event emphasizes the need for robust sensitivity labeling and regular Copilot audits. While fixed, expanded DLP enforcement across storage locations won't complete until late April 2026. Businesses should prioritize data governance to mitigate future AI flaws, ensuring productivity doesn't compromise security.

Microsoft AI Chief: 18 Months to Automate White-Collar Jobs

 

Mustafa Suleyman, CEO of Microsoft AI, has issued a stark warning about the future of white-collar work. In a recent Financial Times interview, he predicted that AI will achieve human-level performance on most professional tasks within 18 months, automating jobs involving computer-based work like accounting, legal analysis, marketing, and project management. This timeline echoes concerns from AI leaders, comparing the shift to the pre-pandemic moment in early 2020 but far more disruptive. Suleyman attributes this to exponential growth in computational power, enabling AI to outperform humans in coding and beyond.

Suleyman's forecast revives 2025 predictions from tech executives. Anthropic's Dario Amodei warned AI could eliminate half of entry-level white-collar jobs, while Ford's Jim Farley foresaw a 50% cut in U.S. white-collar roles. Elon Musk recently suggested artificial general intelligence—AI surpassing human intelligence—could arrive this year. These alarms contrast with CEO silence earlier, likened by The Atlantic to ignoring a shark fin in the water. The drumbeat of disruption is growing louder amid rapid AI advances.

Current AI impact on offices remains limited despite hype. A 2025 Thomson Reuters report shows lawyers and accountants using AI for tasks like document review, yielding only marginal productivity gains without mass displacement. Some studies even indicate setbacks: a METR analysis found AI slowed software developers by 20%. Economic benefits are mostly in Big Tech, with profit margins up over 20% in Q4 2025, while broader indices like the Bloomberg 500 show no change.

Early job losses signal brewing changes. Challenger, Gray & Christmas reported 55,000 AI-related cuts in 2025, including Microsoft's 15,000 layoffs as CEO Satya Nadella pushed to "reimagine" for the AI era. Markets reacted sharply last week with a "SaaSpocalypse" selloff in software stocks after Anthropic and OpenAI launched agentic AI systems mimicking SaaS functions. Investors doubt AI will boost non-tech earnings, per Wall Street consensus.

Suleyman envisions customizable AI transforming every organization. He predicts users will design models like podcasts or blogs, tailored for any job, driving his push for Microsoft "superintelligence" and independent foundation models. As the "most important technology of our time," Suleyman aims to reduce reliance on partners like OpenAI. This could redefine the American Dream, once fueled by MBAs and law degrees, urging urgent preparation for AI's white-collar reckoning.

GitHub Fixes AI Flaw That Could Have Exposed Private Repository Tokens

 



A now-patched security weakness in GitHub Codespaces revealed how artificial intelligence tools embedded in developer environments can be manipulated to expose sensitive credentials. The issue, discovered by cloud security firm Orca Security and named RoguePilot, involved GitHub Copilot, the AI coding assistant integrated into Codespaces. The flaw was responsibly disclosed and later fixed by Microsoft, which owns GitHub.

According to researchers, the attack could begin with a malicious GitHub issue. An attacker could insert concealed instructions within the issue description, specifically crafted to influence Copilot rather than a human reader. When a developer launched a Codespace directly from that issue, Copilot automatically processed the issue text as contextual input. This created an opportunity for hidden instructions to silently control the AI agent operating within the development environment.

Security experts classify this method as indirect or passive prompt injection. In such attacks, harmful instructions are embedded inside content that a large language model later interprets. Because the model treats that content as legitimate context, it may generate unintended responses or perform actions aligned with the attacker’s objective.

Researchers also described RoguePilot as a form of AI-mediated supply chain attack. Instead of exploiting external software libraries, the attacker leverages the AI system integrated into the workflow. GitHub allows Codespaces to be launched from repositories, commits, pull requests, templates, and issues. The exposure occurred specifically when a Codespace was opened from an issue, since Copilot automatically received the issue description as part of its prompt.

The manipulation could be hidden using HTML comment tags, which are invisible in rendered content but still readable by automated systems. Within those hidden segments, an attacker could instruct Copilot to extract the repository’s GITHUB_TOKEN, a credential that provides elevated permissions. In one demonstrated scenario, Copilot could be influenced to check out a specially prepared pull request containing a symbolic link to an internal file. Through techniques such as referencing a remote JSON schema, the AI assistant could read that internal file and transmit the privileged token to an external server.

The RoguePilot disclosure comes amid broader concerns about AI model alignment. Separate research from Microsoft examined a reinforcement learning method called Group Relative Policy Optimization, or GRPO. While typically used to fine-tune large language models after deployment, researchers found it could also weaken safety safeguards, a process they labeled GRP-Obliteration. Notably, training on even a single mildly problematic prompt was enough to make multiple language models more permissive across harmful categories they had never explicitly encountered.

Additional findings stress upon side-channel risks tied to speculative decoding, an optimization technique that allows models to generate multiple candidate tokens simultaneously to improve speed. Researchers found this process could potentially reveal conversation topics or identify user queries with significant accuracy.

Further concerns were raised by AI security firm HiddenLayer, which documented a technique called ShadowLogic. When applied to agent-based systems, the concept evolves into Agentic ShadowLogic. This approach involves embedding backdoors at the computational graph level of a model, enabling silent modification of tool calls. An attacker could intercept and reroute requests through infrastructure under their control, monitor internal endpoints, and log data flows without disrupting normal user experience.

Meanwhile, Neural Trust demonstrated an image-based jailbreak method known as Semantic Chaining. This attack exploits limited reasoning depth in image-generation models by guiding them through a sequence of individually harmless edits that gradually produce restricted or offensive content. Because each step appears safe in isolation, safety systems may fail to detect the evolving harmful intent.

Researchers have also introduced the term Promptware to describe a new category of malicious inputs designed to function like malware. Instead of exploiting traditional code vulnerabilities, promptware manipulates large language models during inference to carry out stages of a cyberattack lifecycle, including reconnaissance, privilege escalation, persistence, command-and-control communication, lateral movement, and data exfiltration.

Collectively, these findings demonstrate that AI systems embedded in development platforms are becoming a new attack surface. As organizations increasingly rely on intelligent automation, safeguarding the interaction between user input, AI interpretation, and system permissions is critical to preventing misuse within trusted workflows.

Malicious Outlook Add-In Hijack Steals 4,000 Microsoft Credentials

 

A breach transformed the AgreeTo plug-in for Microsoft Outlook - once meant for organizing meetings - into a weapon that harvested over four thousand login details. Though built by a third-party developer and offered through the official Office Add-in Store starting in late 2022, it turned against its intended purpose. Instead of simplifying calendars, it funneled user data to attackers. What began as a practical tool ended up exploited, quietly capturing credentials under false trust. 

Not every tool inside Office apps runs locally - some pull data straight from web addresses. For AgreeTo, its feature lived online through a link managed via Vercel. That address stopped receiving updates when the creator walked away, even though people kept using it. With no one fixing issues, the software faded into silence. Yet Microsoft still displayed it as available for download. Later, someone with harmful intent took control of the unused webpage. From there, they served malicious material under the app’s trusted name. A login screen mimicking Microsoft’s design appeared where the real one should have been, according to analysts at Koi Security. 

Instead of authentic access points, users faced a counterfeit form built to harvest credentials. Hidden scripts ran alongside, silently sending captured data elsewhere. After approval in Microsoft’s marketplace, the add-in escaped further checks. The company examines just the manifest when apps are submitted - nothing beyond that gets verified later. Interface components and features load externally, pulled from servers run by developers themselves. 

Since AgreeTo passed initial review, its updated files came straight from machines now under malicious control. Oversight ended once publication was complete. From inside the attacker’s data pipeline, Koi Security found over 4,000 Microsoft login details already taken. Alongside these, information such as credit card records and responses to bank verification questions had also been collected. While analyzing activity, experts noticed live attempts using the breached logins unfolding in real time. 

Opening the harmful AgreeTo add-on in Outlook displayed a counterfeit Microsoft login screen within the sidebar rather than the expected calendar tool. Resembling an authentic authentication portal, this imitation proved hard to recognize as fraudulent. Once victims submitted their details, those credentials got sent through a Telegram bot interface. Following that transfer, individuals saw the genuine Microsoft sign-in page appear - helping mask what had just occurred. Despite keeping ReadWriteItem access, which enables viewing and editing messages, there's no proof the tool tampered with any emails. 

Behind the campaign, investigators spotted a single actor running several phishing setups aimed at financial services, online connectivity firms, and email systems. Notable because it lives inside Microsoft’s official store, AgreeTo stands apart from past threats that spread via spam, phishing, or malvertising. This marks the first time a verified piece of malware has appeared on the Microsoft Marketplace, according to Oren Yomtov at Koi. He also notes it is the initial harmful Outlook extension spotted actively used outside test environments. 

A removal of AgreeTo from the store was carried out by Microsoft. Anyone keeping the add-in should uninstall it without delay, followed by a password change. Attempts to reach Microsoft for input have been made; no reply came so far.

Experts Find Malicious Browser Extensions, Chrome, Safari, and Edge Affected


Threat actors exploit extensions

Cybersecurity experts found 17 extensions for Chrome, Edge, and Firefox browsers which track user's internet activity and install backdoors for access. The extensions were downloaded over 840,000 times. 

The campaign is not new. LayerX claimed that the campaign is part of GhostPoster, another campaign first found by Koi Security last year in December. Last year, researchers discovered 17 different extensions that were downloaded over 50,000 times and showed the same monitoring behaviour and deploying backdoors. 

Few extensions from the new batch were uploaded in 2020, exposing users to malware for years. The extensions appeared in places like the Edge store and later expanded to Firefox and Chrome. 

Few extensions stored malicious JavaScript code in the PNG logo. The code is a kind of instruction on downloading the main payload from a remote server. 

The main payload does multiple things. It can hijack affiliate links on famous e-commerce websites to steal money from content creators and influencers. “The malware watches for visits to major e-commerce platforms. When you click an affiliate link on Taobao or JD.com, the extension intercepts it. The original affiliate, whoever was supposed to earn a commission from your purchase, gets nothing. The malware operators get paid instead,” said Koi researchers. 

After that, it deploys Google Analytics tracking into every page that people open, and removes security headers from HTTP responses. 

In the end, it escapes CAPTCHA via three different ways, and deploy invisible iframes that do ad frauds, click frauds, and tracking. These iframes disappear after 15 seconds.

Besides this, all extensions were deleted from the repositories, but users shoul also remove them personally. 

This staged execution flow demonstrates a clear evolution toward longer dormancy, modularity, and resilience against both static and behavioral detection mechanisms,” said LayerX. 

The PNG steganography technique is employed by some. Some people download JavaScript directly and include it into each page you visit. Others employ bespoke ciphers to encode the C&C domains and use concealed eval() calls. The same assailant. identical servers. many methods of delivery. This appears to be testing several strategies to see which one gets the most installs, avoids detection the longest, and makes the most money.

This campaign reflects a deliberate shift toward patience and precision. By embedding malicious code in images, delaying execution, and rotating delivery techniques across identical infrastructure, the attackers test which methods evade detection longest. The strategy favors longevity and profit over speed, exposing how browser ecosystems remain vulnerable to quietly persistent threats.

Microsoft Unveils Backdoor Scanner for Open-Weight AI Models

 

Microsoft has introduced a new lightweight scanner designed to detect hidden backdoors in open‑weight large language models (LLMs), aiming to boost trust in artificial intelligence systems. The tool, built by the company’s AI Security team, focuses on subtle behavioral patterns inside models to reliably flag tampering without generating many false outcomes. By targeting how specific trigger inputs change a model’s internal operations, Microsoft hopes to offer security teams a practical way to vet AI models before deployment.

The scanner is meant to address a growing problem in AI security: model poisoning and backdoored models that act as “sleeper agents.” In such attacks, threat actors manipulate model weights or training data so the model behaves normally in most scenarios, but switches to malicious or unexpected behavior when it encounters a carefully crafted trigger phrase or pattern. Because these triggers are narrowly defined, the backdoor often evades normal testing and quality checks, making detection difficult. Microsoft notes that both the model’s parameters and its surrounding code can be tampered with, but this tool focuses primarily on backdoors embedded directly into the model’s weights.

To detect these covert modifications, Microsoft’s scanner looks for three practical signals that indicate a poisoned model. First, when given a trigger prompt, compromised models tend to show a distinctive “double triangle” attention pattern, focusing heavily on the trigger itself and sharply reducing the randomness of their output. Second, backdoored LLMs often leak fragments of their own poisoning data, including trigger phrases, through memorization rather than generalization. Third, a single hidden backdoor may respond not just to one exact phrase, but to multiple “fuzzy” variations of that trigger, which the scanner can surface during analysis.

The detection workflow starts by extracting memorized content from the model, then analyzing that content to isolate suspicious substrings that could represent hidden triggers. Microsoft formalizes the three identified signals as loss functions, scores each candidate substring, and returns a ranked list of likely trigger phrases that might activate a backdoor. A key advantage is that the scanner does not require retraining the model or prior knowledge of the specific backdoor behavior, and it can operate across common GPT‑style architectures at scale. This makes it suitable for organizations evaluating open‑weight models obtained from third parties or public repositories.

However, the company stresses that the scanner is not a complete solution to all backdoor risks. It requires direct access to model files, so it cannot be used on proprietary, fully hosted models. It is also optimized for trigger‑based backdoors that produce deterministic outputs, meaning more subtle or probabilistic attacks may still evade detection. Microsoft positions the tool as an important step toward deployable backdoor detection and calls for broader collaboration across the AI security community to refine defenses. In parallel, the firm is expanding its Secure Development Lifecycle to address AI‑specific threats like prompt injection and data poisoning, acknowledging that modern AI systems introduce many new entry points for malicious inputs.

Attackers Hijack Microsoft Email Accounts to Launch Phishing Campaign Against Energy Firms

 


Cybercriminals have compromised Microsoft email accounts belonging to organizations in the energy sector and used those trusted inboxes to distribute large volumes of phishing emails. In at least one confirmed incident, more than 600 malicious messages were sent from a single hijacked account.

Microsoft security researchers explained that the attackers did not rely on technical exploits or system vulnerabilities. Instead, they gained access by using legitimate login credentials that were likely stolen earlier through unknown means. This allowed them to sign in as real users, making the activity harder to detect.

The attack began with emails that appeared routine and business-related. These messages included Microsoft SharePoint links and subject lines suggesting formal documents, such as proposals or confidentiality agreements. To view the files, recipients were asked to authenticate their accounts.

When users clicked the SharePoint link, they were redirected to a fraudulent website designed to look legitimate. The site prompted them to enter their Microsoft login details. By doing so, victims unknowingly handed over valid usernames and passwords to the attackers.

After collecting credentials, the attackers accessed the compromised email accounts from different IP addresses. They then created inbox rules that automatically deleted incoming emails and marked messages as read. This step helped conceal the intrusion and prevented account owners from noticing unusual activity.

Using these compromised inboxes, the attackers launched a second wave of phishing emails. These messages were sent not only to external contacts but also to colleagues and internal distribution lists. Recipients were selected based on recent email conversations found in the victim’s inbox, increasing the likelihood that the messages would appear trustworthy.

In this campaign, the attackers actively monitored inbox responses. They removed automated replies such as out-of-office messages and undeliverable notices. They also read replies from recipients and responded to questions about the legitimacy of the emails. All such exchanges were later deleted to erase evidence.

Any employee within an energy organization who interacted with the malicious links was also targeted for credential theft, allowing the attackers to expand their access further.

Microsoft confirmed that the activity began in January and described it as a short-duration, multi-stage phishing operation that was quickly disrupted. The company did not disclose how many organizations were affected, identify the attackers, or confirm whether the campaign is still active.

Security experts warn that simply resetting passwords may not be enough in these attacks. Because attackers can interfere with multi-factor authentication settings, they may maintain access even after credentials are changed. For example, attackers can register their own device to receive one-time authentication codes.

Despite these risks, multi-factor authentication remains a critical defense against account compromise. Microsoft also recommends using conditional access controls that assess login attempts based on factors such as location, device health, and user role. Suspicious sign-ins can then be blocked automatically.

Additional protection can be achieved by deploying anti-phishing solutions that scan emails and websites for malicious activity. These measures, combined with user awareness, are essential as attackers increasingly rely on stolen identities rather than software flaws.


Microsoft Introduces Hardware-Accelerated BitLocker to Boost Windows 11 Security and Performance

 

Microsoft is updating Windows 11 with hardware-accelerated BitLocker to improve both data security and system performance. The change enhances full-disk encryption by shifting cryptographic work from the CPU to dedicated hardware components within modern processors, helping systems run more efficiently while keeping data protected. 

BitLocker is Windows’ built-in encryption feature that prevents unauthorized access to stored data. During startup, it uses the Trusted Platform Module to manage encryption keys and unlock drives after verifying system integrity. While this method has been effective, Microsoft says faster storage technologies have made the performance impact of software-based encryption more noticeable, especially during demanding tasks. 

As storage speeds increase, BitLocker’s encryption overhead can slow down activities like gaming and video editing. To address this, Microsoft is offloading encryption tasks to specialized hardware within the processor that is designed for secure and high-speed cryptographic operations. This reduces reliance on the CPU and improves overall system responsiveness. 

With hardware acceleration enabled, large encryption workloads no longer heavily tax the CPU. Microsoft reports that testing showed about 70% fewer CPU cycles per input-output operation compared to software-based BitLocker, although actual gains depend on hardware configurations. 

On supported devices with NVMe drives and compatible processors, BitLocker will default to hardware-accelerated encryption using the XTS-AES-256 algorithm. This applies to automatic device encryption, manual activation, policy-based deployment, and script-driven setups, with some exceptions. 

The update also strengthens security by keeping encryption keys protected within hardware, reducing exposure to memory or CPU-based attacks. Combined with TPM protections, this moves BitLocker closer to eliminating key handling in general system memory.  

Hardware-accelerated BitLocker is available in Windows 11 version 24H2 with September updates installed and will also be included in version 25H2. Initial support is limited to Intel vPro systems with Intel Core Ultra Series 3 (Panther Lake) processors, with broader system-on-a-chip support planned. 

Users can confirm whether hardware acceleration is active by running the “manage-bde -status” command. Microsoft notes BitLocker will revert to software encryption if unsupported algorithms or key sizes are used, certain enterprise policies apply, or FIPS mode is enabled on hardware without certified cryptographic offloading.

2FA Fail: Hackers Exploit Microsoft 365 to Launch Code Phishing Attacks


Two-factor authentication (2FA) has been one of the most secure ways to protect online accounts. It requires a secondary code besides a password. However, in recent times, 2FA has not been a reliable method anymore, as hackers have started exploiting it easily. 

Experts advise users to use passkeys instead of 2FA these days, as they are more secure and less prone to hack attempts. Recent reports have shown that 2FA as a security method is undermined. 

Russian-linked state sponsored threat actors are now abusing flaws in Microsoft’s 365. Experts from Proofpoint have noticed a surge in Microsoft 365 account takeover cyberattacks, threat actors are exploiting authentication code phishing to compromise Microsoft’s device authorization flow.

They are also launching advanced phishing campaigns that escape 2FA and hack sensitive accounts. 

About the attack

The recent series of cyberattacks use device code phishing where hackers lure victims into giving their authentication codes on fake websites that look real. When the code is entered, hackers gain entry to the victim's Microsoft 365 account, escaping the safety of 2FA. 

The campaigns started in early 2025. In the beginning, hackers relied primarily on code phishing. By March, they increased their tactics to exploit Oauth authentication workflows, which are largely used for signing into apps and services. The development shows how fast threat actors adapt when security experts find their tricks.

Who is the victim? 

The attacks are particularly targeted against high-value sectors that include:

Universities and research institutes 

Defense contractors

Energy providers

Government agencies 

Telecommunication companies 

By targeting these sectors, hackers increase the impact of their attacks for purposes such as disruption, espionage, and financial motives. 

The impact 

The surge in 2FA code attacks exposes a major gap, no security measure is foolproof. While 2FA is still far stronger than relying on passwords alone, it can be undermined if users are deceived into handing over their codes. This is not a failure of the technology itself, but of human trust and awareness.  

A single compromised account can expose sensitive emails, documents, and internal systems. Users are at risk of losing their personal data, financial information, and even identity in these cases.

How to Stay Safe

Verify URLs carefully. Never enter authentication codes on unfamiliar or suspicious websites.  

Use phishing-resistant authentication. Hardware security keys (like YubiKeys) or biometric logins are harder to trick.  

Enable conditional access policies. Organizations can restrict logins based on location, device, or risk level.  

Monitor OAuth activity. Be cautious of unexpected consent requests from apps or services.  

Educate users. Awareness training is often the most effective defense against social engineering.  


Amazon and Microsoft AI Investments Put India at a Crossroads

 

Major technology companies Amazon and Microsoft have announced combined investments exceeding $50 billion in India, placing artificial intelligence firmly at the center of global attention on the country’s technology ambitions. Microsoft chief executive Satya Nadella revealed the company’s largest-ever investment in Asia, committing $17.5 billion to support infrastructure development, workforce skills, and what he described as India’s transition toward an AI-first economy. Shortly after, Amazon said it plans to invest more than $35 billion in India by 2030, with part of that funding expected to strengthen its artificial intelligence capabilities in the country. 

These announcements arrive at a time of heightened debate around artificial intelligence valuations globally. As concerns about a potential AI-driven market bubble have grown, some financial institutions have taken a contrarian view on India’s position. Analysts at Jefferies described Indian equities as a “reverse AI trade,” suggesting the market could outperform if global enthusiasm for AI weakens. HSBC has echoed similar views, arguing that Indian stocks offer diversification for investors wary of overheated technology markets elsewhere. This perspective has gained traction as Indian equities have underperformed regional peers over the past year, while foreign capital has flowed heavily into AI-centric companies in South Korea and Taiwan. 

Against this backdrop, the scale of Amazon and Microsoft’s commitments offers a significant boost to confidence. However, questions remain about how competitive India truly is in the global AI race. Adoption of artificial intelligence across the country has accelerated, with increasing investment in data centers and early movement toward domestic chip manufacturing. A recent collaboration between Intel and Tata Electronics to produce semiconductors locally reflects growing momentum in strengthening AI infrastructure. 

Despite these advances, India continues to lag behind global leaders when it comes to building sovereign AI models. The government launched a national AI mission aimed at supporting researchers and startups with high-performance computing resources to develop a large multilingual model. While officials say a sovereign model supporting more than 22 languages is close to launch, global competitors such as OpenAI and China-based firms have continued to release more advanced systems in the interim. India’s public investment in this effort remains modest when compared with the far larger AI spending programs seen in countries like France and Saudi Arabia. 

Structural challenges also persist. Limited access to advanced semiconductors, fragmented data ecosystems, and insufficient long-term research investment constrain progress. Although India has a higher-than-average concentration of AI-skilled professionals, retaining top talent remains difficult as global mobility draws developers overseas. Experts argue that policy incentives will be critical if India hopes to convert its talent advantage into sustained leadership. 

Even so, international studies suggest India performs strongly relative to its economic stage. The country ranks among the top five globally for new AI startups receiving investment and contributes a significant share of global AI research publications. While funding volumes remain far below those of the United States and China, experts believe India’s advantage may lie in applying AI to real-world problems rather than competing directly in foundational model development. 

AI-driven applications addressing agriculture, education, and healthcare are already gaining traction, demonstrating the technology’s potential impact at scale. At the same time, analysts warn that artificial intelligence could disrupt India’s IT services sector, a long-standing engine of economic growth. Slowing hiring, wage pressure, and weaker stock performance indicate that this transition is already underway, underscoring both the opportunity and the risk embedded in India’s AI future.

December Patch Tuesday Brings Critical Microsoft, Notepad++, Fortinet, and Ivanti Security Fixes

 


While December's Patch Tuesday gave us a lighter release than normal, it arrived with several urgent vulnerabilities that need attention immediately. In all, Microsoft released 57 CVE patches to finish out 2025, including one flaw already under active exploitation and two others that were publicly disclosed. Notably, critical security updates also came from Notepad++, Ivanti, and Fortinet this cycle, making it particularly important for system administrators and enterprise security teams alike. 

The most critical of Microsoft's disclosures this month is CVE-2025-62221, a Windows Cloud Files Mini Filter Driver bug rated 7.8 on the CVSS scale. It allows for privilege escalation: an attacker who has code execution rights can leverage the bug to escalate to full system-level access. Researchers say this kind of bug is exploited on a regular basis in real-world intrusions, and "patching ASAP" is critical. Microsoft hasn't disclosed yet which threat actors are actively exploiting this flaw; however, experts explain that bugs like these "tend to pop up in almost every big compromise and are often used as stepping stones to further breach". 

Another two disclosures from Microsoft were CVE-2025-54100 in PowerShell and CVE-2025-64671, impacting GitHub Copilot for JetBrains. Although these are not confirmed to be exploited, they were publicly disclosed ahead of patching. Graded at 8.4, the Copilot vulnerability would have allowed for remote code execution via malicious cross-prompt injection, provided a user is tricked into opening untrusted files or connecting to compromised servers. Security researchers expect more vulnerabilities of this type to emerge as AI-integrated development tools expand in usage. 

But one of the more ominous developments outside Microsoft belongs to Notepad++. The popular open-source editor pushed out version 8.8.9 to patch a weakness in the way updates were checked for authenticity. Attackers were managing to intercept network traffic from the WinGUp update client, then redirecting users to rogue servers, where malicious files were downloaded instead of legitimate updates. There are reports that threat groups in China were actively testing and exploiting this vulnerability. Indeed, according to the maintainer, "Due to the improper update integrity validation, an adversary was able to manipulate the download"; therefore, users should upgrade as soon as possible. 

Fortinet also patched two critical authentication bypass vulnerabilities, CVE-2025-59718 and CVE-2025-59719, in FortiOS and several related products. The bugs enable hackers to bypass FortiCloud SSO authentication using crafted SAML messages, which only works if SSO has been enabled. Administrators are advised to disable the feature until they can upgrade to patched builds to avoid unauthorized access. Rounding out the disclosures, Ivanti released a fix for CVE-2025-10573, a severe cross-site scripting vulnerability in its Endpoint Manager. The bug allows an attacker to register fake endpoints and inject malicious JavaScript into the administrator dashboard. Viewed, this could serve an attacker full control over the session without credentials. There has been no observed exploitation so far, but researchers warn that it is likely attackers will reverse engineer the fix soon, making for a deployment environment of haste.

End to End-to-end Encryption? Google Update Allows Firms to Read Employee Texts


Your organization can now read your texts

Microsoft stirred controversy when it revealed a Teams update that could tell your organization when you're not at work. Google did the same. Say goodbye to end-to-end encryption. With this new RCS and SMS Android update, your RCS and SMS texts are no longer private. 

According to Android Authority, "Google is rolling out Android RCS Archival on Pixel (and other Android) phones, allowing employers to intercept and archive RCS chats on work-managed devices. In simpler terms, your employer will now be able to read your RCS chats in Google Messages despite end-to-end encryption.”

Only for organizational devices 

This is only applicable to work-managed devices and doesn't impact personal devices. In regulated industries, it will only add RCS archiving to existing SMS archiving. In an organization, however, texting is different than emailing. In the former, employees sometimes share about their non-work life. End-to-end encryptions keep these conversations safe, but this will no longer be the case.

The end-to-end question 

There is alot of misunderstanding around end-to-end encryption. It protects messages when they are being sent, but once they are on your device, they are decrypted and no longer safe. 

According to Google, this is "a dependable, Android-supported solution for message archival, which is also backwards compatible with SMS and MMS messages as well. Employees will see a clear notification on their device whenever the archival feature is active.”

What will change?

With this update, getting a phone at work is no longer as good as it seems. Employees have always been insecure about the risks in over-sharing on email, as it is easy to spy. But not texts. 

The update will make things different. According to Google, “this new capability, available on Google Pixel and other compatible Android Enterprise devices gives your employees all the benefits of RCS — like typing indicators, read receipts, and end-to-end encryption between Android devices — while ensuring your organization meets its regulatory requirements.”

Promoting organizational surveillance 

Because of organizational surveillance, employees at times turn to shadow IT systems such as Whatsapp and Signal to communicate with colleagues. The new Google update will only make things worse. 

“Earlier,” Google said, ““employers had to block the use of RCS entirely to meet these compliance requirements; this update simply allows organizations to support modern messaging — giving employees messaging benefits like high-quality media sharing and typing indicators — while maintaining the same compliance standards that already apply to SMS messaging."

Microsoft Quietly Changes Windows Shortcut Handling After Dangerous Zero-day Abuse

 



Microsoft has changed how Windows displays information inside shortcut files after researchers confirmed that multiple hacking groups were exploiting a long-standing weakness in Windows Shell Link (.lnk) files to spread malware in real attacks.

The vulnerability, CVE-2025-9491, pertains to how Windows accesses and displays the "Target" field of a shortcut file. The attackers found that they could fill the Target field with big sets of blank spaces, followed by malicious commands. When a user looks at a file's properties, Windows only displays the first part of that field. The malicious command remains hidden behind whitespace, making the shortcut seem innocuous.

These types of shortcuts are usually distributed inside ZIP folders or other similar archives, since many email services block .lnk files outright. The attack relies on persuasion: Victims must willingly open the shortcut for the malware to gain an entry point on the system. When opened, the hidden command can install additional tools or create persistence.


Active Exploitation by Multiple Threat Groups

Trend Micro researchers documented in early 2025 that this trick was already being used broadly. Several state-backed groups and financially motivated actors had adopted the method to deliver a range of malware families, from remote access trojans to banking trojans. Later, Arctic Wolf Labs also observed attempts to use the same technique against diplomats in parts of Europe, where attackers used the disguised shortcut files to drop remote access malware.

The campaigns followed a familiar pattern. Victims received a compressed folder containing what looked like a legitimate document or utility. Inside sat a shortcut that looked ordinary but actually executed a concealed command once it was opened.


Microsoft introduces a quiet mitigation

Although Microsoft first said the bug did not meet the criteria for out-of-band servicing because it required user interaction, the company nonetheless issued a silent fix via standard Windows patching. With the patches in place, Windows now displays the full Target field in a shortcut's properties window instead of truncating the display after about 260 characters.

This adjustment does not automatically remove malicious arguments inside a shortcut, nor does it pop up with a special warning when an unusually long command is present. It merely provides full visibility to users, which may make suspicious content more easily identifiable for the more cautious users.

When questioned about the reason for the change, Microsoft repeated its long-held guidance: users shouldn't open files from unknown sources and should pay attention to its built-in security warnings.


Independent patch offers stricter safeguards

Because Microsoft's update is more a matter of visibility than enforcement, ACROS Security has issued an unofficial micropatch via its 0patch service. The update its team released limits the length of Target fields and pops up a warning before allowing a potentially suspicious shortcut to open. This more strict treatment, according to the group, would block the vast majority of malicious shortcuts seen in the wild.

This unofficial patch is now available to 0patch customers using various versions of Windows, including editions that are no longer officially supported.


How users can protect themselves

Users and organizations can minimize the risk by refraining from taking shortcuts coming from unfamiliar sources, especially those that are wrapped inside compressed folders. Security teams are encouraged to ensure Windows systems are fully updated, apply endpoint protection tools, and treat unsolicited attachments with care. Training users to inspect file properties and avoid launching unexpected shortcut files is also a top priority.

However, as the exploitation of CVE-2025-9491 continues to manifest in targeted attacks, the updated Windows behavior, user awareness, and security controls are layered together for the best defense for now. 

Hackers Use Look-Alike Domain Trick to Imitate Microsoft and Capture User Credentials

 




A new phishing operation is misleading users through an extremely subtle visual technique that alters the appearance of Microsoft’s domain name. Attackers have registered the look-alike address “rnicrosoft(.)com,” which replaces the single letter m with the characters r and n positioned closely together. The small difference is enough to trick many people into believing they are interacting with the legitimate site.

This method is a form of typosquatting where criminals depend on how modern screens display text. Email clients and browsers often place r and n so closely that the pair resembles an m, leading the human eye to automatically correct the mistake. The result is a domain that appears trustworthy at first glance although it has no association with the actual company.

Experts note that phishing messages built around this tactic often copy Microsoft’s familiar presentation style. Everything from symbols to formatting is imitated to encourage users to act without closely checking the URL. The campaign takes advantage of predictable reading patterns where the brain prioritizes recognition over detail, particularly when the user is scanning quickly.

The deception becomes stronger on mobile screens. Limited display space can hide the entire web address and the address bar may shorten or disguise the domain. Criminals use this opportunity to push malicious links, deliver invoices that look genuine, or impersonate internal departments such as HR teams. Once a victim believes the message is legitimate, they are more likely to follow the link or download a harmful attachment.

The “rn” substitution is only one example of a broader pattern. Typosquatting groups also replace the letter o with the number zero, add hyphens to create official-sounding variations, or register sites with different top level domains that resemble the original brand. All of these are intended to mislead users into entering passwords or sending sensitive information.

Security specialists advise users to verify every unexpected message before interacting with it. Expanding the full sender address exposes inconsistencies that the display name may hide. Checking links by hovering over them, or using long-press previews on mobile devices, can reveal whether the destination is legitimate. Reviewing email headers, especially the Reply-To field, can also uncover signs that responses are being redirected to an external mailbox controlled by attackers.

When an email claims that a password reset or account change is required, the safest approach is to ignore the provided link. Instead, users should manually open a new browser tab and visit the official website. Organisations are encouraged to conduct repeated security awareness exercises so employees do not react instinctively to familiar-looking alerts.


Below are common variations used in these attacks:

Letter Pairing: r and n are combined to imitate m as seen in rnicrosoft(.)com.

Number Replacement: the letter o is switched with the number zero in addresses like micros0ft(.)com.

Added Hyphens: attackers introduce hyphens to create domains that appear official, such as microsoft-support(.)com.

Domain Substitution: similar names are created by altering only the top level domain, for example microsoft(.)co.


This phishing strategy succeeds because it relies on human perception rather than technical flaws. Recognising these small changes and adopting consistent verification habits remain the most effective protections against such attacks.



Aisuru Botnet Launches 15.72 Tbps DDoS Attack on Microsoft Azure Network

 

Microsoft has reported that its Azure platform recently experienced one of the largest distributed denial-of-service attacks recorded to date, attributed to the fast-growing Aisuru botnet. According to the company, the attack reached a staggering peak of 15.72 terabits per second and originated from more than 500,000 distinct IP addresses across multiple regions. The traffic surge consisted primarily of high-volume UDP floods and was directed toward a single public-facing Azure IP address located in Australia. At its height, the attack generated nearly 3.64 billion packets per second. 

Microsoft said the activity was linked to Aisuru, a botnet categorized in the same threat class as the well-known Turbo Mirai malware family. Like Mirai, Aisuru spreads by compromising vulnerable Internet of Things (IoT) hardware, including home routers and cameras, particularly those operating on residential internet service providers in the United States and additional countries. Azure Security senior product marketing manager Sean Whalen noted that the attack displayed limited source spoofing and used randomized ports, which ultimately made network tracing and provider-level mitigation more manageable. 

The same botnet has been connected to other record-setting cyber incidents in recent months. Cloudflare previously associated Aisuru with an attack that measured 22.2 Tbps and generated over 10.6 billion packets per second in September 2025, one of the highest traffic bursts observed in a short-duration DDoS event. Despite lasting only 40 seconds, that incident was comparable in bandwidth consumption to more than one million simultaneous 4K video streams. 

Within the same timeframe, researchers from Qi’anxin’s XLab division attributed another 11.5 Tbps attack to Aisuru and estimated the botnet was using around 300,000 infected devices. XLab’s reporting indicates rapid expansion earlier in 2025 after attackers compromised a TotoLink router firmware distribution server, resulting in the infection of approximately 100,000 additional devices. 

Industry reporting also suggests the botnet has targeted vulnerabilities in consumer equipment produced by major vendors, including D-Link, Linksys, Realtek-based systems, Zyxel hardware, and network equipment distributed through T-Mobile. 

The botnet’s growing presence has begun influencing unrelated systems such as DNS ranking services. Cybersecurity journalist Brian Krebs reported that Cloudflare removed several Aisuru-controlled domains from public ranking dashboards after they began appearing higher than widely used legitimate platforms. Cloudflare leadership confirmed that intentional traffic manipulation distorted ranking visibility, prompting new internal policies to suppress suspected malicious domain patterns. 

Cloudflare disclosed earlier this year that DDoS attacks across its network surged dramatically. The company recorded a 198% quarter-to-quarter rise and a 358% year-over-year increase, with more than 21.3 million attempted attacks against customers during 2024 and an additional 6.6 million incidents directed specifically at its own services during an extended multi-vector campaign.

Microsoft Teams’ New Location-Based Status Sparks Major Privacy and Legal Concerns

 

Microsoft Teams is preparing to roll out a new feature that could significantly change how employee presence is tracked in the workplace. By the end of the year, the platform will be able to automatically detect when an employee connects to the company’s office Wi-Fi and update their status to show they are working on-site. This information will be visible to both colleagues and supervisors, raising immediate questions about privacy and legality. Although Microsoft states that the feature will be switched off by default, IT administrators can enable it at the organizational level to improve “transparency and collaboration.” 

The idea appears practical on the surface. Remote workers may want to know whether coworkers are physically present at the office to access documents or coordinate tasks that require on-site resources. However, the convenience quickly gives way to concerns about surveillance. Critics warn that this feature could easily be misused to monitor employee attendance or indirectly enforce return-to-office mandates—especially as Microsoft itself is requiring employees living within 50 miles of its offices to spend at least three days a week on-site starting next February. 

To better understand the implications, TECHBOOK consulted Professor Christian Solmecke, a specialist in media and IT law. He argues that the feature rests on uncertain legal footing under European privacy regulations. According to Solmecke, automatically updating an employee’s location constitutes the processing of personal data, which is allowed under the GDPR only when supported by a valid legal basis. In this case, two possibilities exist: explicit employee consent or a legitimate interest on the part of the employer. But as Solmecke explains, an employer’s interest in transparency rarely outweighs an employee’s right to privacy, especially when tracking is not strictly necessary for job performance. 

The expert compares the situation to covert video surveillance, which is only permitted when there is a concrete suspicion of wrongdoing. Location tracking, if used to verify whether workers are actually on-site, falls into a similar category. For routine operations, he stresses, such monitoring would likely be disproportionate. Solmecke adds that neither broad IT policies nor standard employment contracts provide sufficient grounds for processing this type of data. Consent must be truly voluntary, which is difficult to guarantee in an employer-employee relationship where workers may feel pressured to agree. 

He states that if companies wish to enable this automatic location sharing, a dedicated written agreement would be required—one that employees can decline without negative repercussions. Additionally, in workplaces with a works council, co-determination rules apply. Under Germany’s Works Constitution Act, systems capable of monitoring performance or behavior must be approved by the works council before being implemented. Without such approval or a corresponding works agreement, enabling the feature would violate privacy law. 

For employees, the upcoming rollout does not mean their on-site presence will immediately become visible. Microsoft cannot allow employers to activate such a feature without clear employee knowledge or consent. According to Solmecke, any attempt to automatically log and share employee location inside the company would be legally vulnerable and potentially challengeable. Workers retain the right to reject such data collection unless a lawful framework is in place. 

As companies continue navigating hybrid and remote work models, Microsoft’s new location-based status illustrates the growing tension between workplace efficiency and digital privacy. Whether organizations adopt this feature will likely depend on how well they balance those priorities—and whether they can do so within the boundaries of data protection law.

Tech Giants Pour Billions Into AI Race for Market Dominance

 

Tech giants are intensifying their investments in artificial intelligence, fueling an industry boom that has driven stock markets to unprecedented heights. Fresh earnings reports from Meta, Alphabet, and Microsoft underscore the immense sums being poured into AI infrastructure—from data centers to advanced chips—despite lingering doubts about the speed of returns.

Meta announced that its 2025 capital expenditures will range between $70 billion and $72 billion, slightly higher than its earlier forecast. The company also revealed plans for substantially larger spending growth in 2026 as it seeks to compete more aggressively with players like OpenAI.

During a call with analysts, CEO Mark Zuckerberg defended Meta’s aggressive investment strategy, emphasizing AI’s transformative potential in driving both new product development and enhancing its core advertising business. He described the firm’s infrastructure as operating in a “compute-starved” state and argued that accelerating spending was essential to unlocking future growth.

Alphabet, parent to Google and YouTube, also raised its annual capital spending outlook to between $91 billion and $93 billion—up from $85 billion earlier this year. This nearly doubles what the company spent in 2024 and highlights its determination to stay at the forefront of large-scale AI development.

Microsoft’s quarterly report similarly showcased its expanding investment efforts. The company disclosed $34.9 billion in capital expenditures through September 30, surpassing analyst expectations and climbing from $24 billion in the previous quarter. CEO Satya Nadella said Microsoft continues to ramp up AI spending in both infrastructure and talent to seize what he called a “massive opportunity.” He noted that Azure and the company’s broader portfolio of AI tools are already having tangible real-world effects.

Investor enthusiasm surrounding these bold AI commitments has helped lift the share prices of all three firms above the broader S&P 500 index. Still, Wall Street remains keenly interested in seeing whether these heavy capital outlays will translate into measurable profits.

Bank of America senior economist Aditya Bhave observed that robust consumer activity and AI-driven business investment have been the key pillars supporting U.S. economic resilience. As long as the latter remains strong, he said, it signals continued GDP growth. Despite an 83 percent profit drop for Meta due to a one-time tax charge, Microsoft and Alphabet reported profit increases of 12 percent and 33 percent, respectively.

Microsoft Warns Windows 10 Users: Hackers Target Outdated Systems

Microsoft Warns Windows 10 Users: Hackers Target Outdated Systems

Modern cyberattacks rarely target the royal jewels.  Instead, they look for flaws in the systems that control the keys, such as obsolete operating systems, aging infrastructure, and unsupported endpoints.  For technical decision makers (TDMs), these blind spots are more than just an IT inconvenience.  They pose significant hazards to data security, compliance, and enterprise control.

Dangers of outdated windows 10

With the end of support for Windows 10 approaching, many businesses are asking themselves how many of their devices, servers, or endpoints are already (or will soon be) unsupported.  More importantly, what hidden weaknesses does this introduce into compliance, auditability, and access governance?

Most IT leaders understand the urge to keep outdated systems running for a little longer, patch what they can, and get the most value out of the existing infrastructure.

Importance of system updates

However, without regular upgrades, endpoint security technologies lose their effectiveness, audit trails become more difficult to maintain, and compliance reporting becomes a game of guesswork. 

Research confirms the magnitude of the problem.  According to Microsoft's newest Digital Defense Report, more than 90% of ransomware assaults that reach the encryption stage originate on unmanaged devices that lack sufficient security controls.  

Unsupported systems frequently fall into this category, making them ideal candidates for exploitation.  Furthermore, because these vulnerabilities exist at the infrastructure level rather than in individual files, they are frequently undetectable until an incident happens.

Attack tactic

Hackers don't have to break your defense. They just need to wait for you to leave a window open. With the end of support for Windows 10 approaching, hackers are already predicting that many businesses will fall behind. 

Waiting carries a high cost. Breaches on unsupported infrastructure can result in higher cleanup costs, longer downtime, and greater reputational harm than attacks on supported systems. Because compliance frameworks evolve quicker than legacy systems, staying put risks falling behind on standards that influence contracts, customer trust, and potentially your ability to do business.

What next?

Although unsupported systems may appear to be small technical defects, they quickly escalate into enterprise-level threats. The longer they remain in play, the larger the gap they create in endpoint security, compliance, and overall data security. Addressing even one unsupported system now can drastically reduce risk and give IT management more piece of mind. 

TDMs have a clear choice: modernize proactively or leave the door open for the next assault.

Microsoft’s Copilot Actions in Windows 11 Sparks Privacy and Security Concerns

When it comes to computer security, every decision ultimately depends on trust. Users constantly weigh whether to download unfamiliar software, share personal details online, or trust that their emails reach the intended recipient securely. Now, with Microsoft’s latest feature in Windows 11, that question extends further — should users trust an AI assistant to access their files and perform actions across their apps? 


Microsoft’s new Copilot Actions feature introduces a significant shift in how users interact with AI on their PCs. The company describes it as an AI agent capable of completing tasks by interacting with your apps and files — using reasoning, vision, and automation to click, type, and scroll just like a human. This turns the traditional digital assistant into an active AI collaborator, capable of managing documents, organizing folders, booking tickets, or sending emails once user permission is granted.  

However, giving an AI that level of control raises serious privacy and security questions. Granting access to personal files and allowing it to act on behalf of a user requires substantial confidence in Microsoft’s safeguards. The company seems aware of the potential risks and has built multiple protective layers to address them. 

The feature is currently available only in experimental mode through the Windows Insider Program for pre-release users. It remains disabled by default until manually turned on from Settings > System > AI components > Agent tools by activating the “Experimental agentic features” option. 

To maintain strict oversight, only digitally signed agents from trusted sources can integrate with Windows. This allows Microsoft to revoke or block malicious agents if needed. Furthermore, Copilot Actions operates within a separate standard account created when the feature is enabled. By default, the AI can only access known folders such as Documents, Downloads, Desktop, and Pictures, and requires explicit user permission to reach other locations. 

These interactions occur inside a controlled Agent workspace, isolated from the user’s desktop, much like Windows Sandbox. According to Dana Huang, Corporate Vice President of Windows Security, each AI agent begins with limited permissions, gains access only to explicitly approved resources, and cannot modify the system without user consent. 

Adding to this, Microsoft’s Peter Waxman confirmed in an interview that the company’s security team is actively “red-teaming” the feature — conducting simulated attacks to identify vulnerabilities. While he did not disclose test details, Microsoft noted that more granular privacy and security controls will roll out during the experimental phase before the feature’s public release. 

Even with these assurances, skepticism remains. The security research community — known for its vigilance and caution — will undoubtedly test whether Microsoft’s new agentic AI model can truly deliver on its promise of safety and transparency. As the preview continues, users and experts alike will be watching closely to see whether Copilot Actions earns their trust.

Windows 10 Support Termination Leaves Devices Vulnerable

 

Microsoft has officially ended support for Windows 10, marking a major shift impacting hundreds of millions of users worldwide. Released in 2015, the operating system will no longer receive free security updates, bug fixes, or technical assistance, leaving all devices running it vulnerable to exploitation. This decision mirrors previous end-of-life events such as Windows XP, which saw a surge in cyberattacks after losing support.

Rising security threats

Without updates, Windows 10 systems are expected to become prime targets for hackers. Thousands of vulnerabilities have already been documented in public databases like ExploitDB, and several critical flaws have been actively exploited. 

Among them are CVE-2025-29824, a “use-after-free” bug in the Common Log File System Driver with a CVSS score of 7.8; CVE-2025-24993, a heap-based buffer overflow in NTFS marked as “known exploited”; and CVE-2025-24984, leaking NTFS log data with the highest EPSS score of 13.87%. 

These vulnerabilities enable privilege escalation, code execution, or remote intrusion, many of which have been added to the U.S. CISA’s Known Exploited Vulnerabilities (KEV) catalog, signaling the seriousness of the risks.

Limited upgrade paths

Microsoft recommends that users migrate to Windows 11, which features modernized architecture and ongoing support. However, strict hardware requirements mean that roughly 200 million Windows 10 computers worldwide remain ineligible for the upgrade. 

For those unable to transition, Microsoft provides three main options: purchasing new hardware compatible with Windows 11, enrolling in a paid Extended Security Updates (ESU) program (offering patches for one extra year), or continuing to operate unsupported — a risky path exposing systems to severe cyber threats.

The support cutoff extends beyond the OS. Microsoft Office 2016 and 2019 have simultaneously reached end-of-life, leaving only newer versions like Office 2021 and LTSC operable but unsupported on Windows 10. Users are encouraged to switch to Microsoft 365 or move licenses to Windows 11 devices. Notably, support for Office LTSC 2021 ends in October 2026.

Data protection tips

Microsoft urges users to back up critical data and securely erase drives before recycling or reselling devices. Participating manufacturers and Microsoft itself offer trade-in or recycling programs to ensure data safety. As cyber risks amplify and hackers exploit obsolete systems, users still on Windows 10 face a critical choice — upgrade, pay for ESU, or risk exposure in an increasingly volatile digital landscape.