Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Italy Steps Up Cyber Defenses as Milano–Cortina Winter Olympics Approach

 



Inside a government building in Rome, located opposite the ancient Aurelian Walls, dozens of cybersecurity professionals have been carrying out continuous monitoring operations for nearly a year. Their work focuses on tracking suspicious discussions and coordination activity taking place across hidden corners of the internet, including underground criminal forums and dark web marketplaces. This monitoring effort forms a core part of Italy’s preparations to protect the Milano–Cortina Winter Olympic Games from cyberattacks.

The responsibility for securing the digital environment of the Games lies with Italy’s National Cybersecurity Agency, an institution formed in 2021 to centralize the country’s cyber defense strategy. The upcoming Winter Olympics represent the agency’s first large-scale international operational test. Officials view the event as a likely target for cyber threats because the Olympics attract intense global attention. Such visibility can draw a wide spectrum of malicious actors, ranging from small-scale cybercriminal groups seeking disruption or financial gain to advanced threat groups believed to have links with state interests. These actors may attempt to use the event as a platform to make political statements, associate attacks with ideological causes, or exploit broader geopolitical tensions.

The Milano–Cortina Winter Games will run from February 6 to February 22 and will be hosted across multiple Alpine regions for the first time in Olympic history. This multi-location format introduces additional security and coordination challenges. Each venue relies on interconnected digital systems, including communications networks, event management platforms, broadcasting infrastructure, and logistics systems. Securing a geographically distributed digital environment exponentially increases the complexity of monitoring, response coordination, and incident containment.

Officials estimate that the Games will reach approximately three billion viewers globally, alongside around 1.5 million ticket-holding spectators on site. This scale creates a vast digital footprint. High-visibility services, such as live streaming platforms, official event websites, and ticket purchasing systems, are considered particularly attractive targets. Disrupting these services can generate widespread media attention, cause public confusion, and undermine confidence in the organizers’ ability to safeguard critical digital operations.

Italy’s planning has been shaped by recent Olympic experience. During the 2024 Paris Summer Olympics, authorities recorded more than 140 cyber incidents. In 22 cases, attackers managed to gain access to information systems. While none of these incidents disrupted the competitions themselves, the sheer volume of hostile activity demonstrated the persistent pressure faced by host nations. On the day of the opening ceremony in Paris, France’s TGV high-speed rail network was also targeted in coordinated physical sabotage attacks involving explosive devices. This incident illustrated how large global events can attract both cyber threats and physical security risks at the same time.

Italian cybersecurity officials anticipate comparable levels of hostile activity during the Milano–Cortina Games, with an additional layer of complexity introduced by artificial intelligence. AI tools can be used by attackers to automate technical tasks, enhance reconnaissance, and support more convincing phishing and impersonation campaigns. These techniques can increase the speed and scale of cyber operations while making malicious activity harder to detect. Although authorities currently report no specific, elevated threat level, they acknowledge that the overall risk environment is becoming more complex due to the growing availability of AI-assisted tools.

The National Cybersecurity Agency’s defensive approach emphasizes early detection rather than reactive response. Analysts continuously monitor open websites, underground criminal communities, and social media channels to identify emerging threat patterns before they develop into direct intrusion attempts. This method is designed to provide early warning, allowing technical teams to strengthen defenses before attackers move from planning to execution.

Operational coordination will involve multiple teams. Around 20 specialists from the agency’s operational staff will focus exclusively on Olympic-related cyber intelligence from the headquarters in Rome. An additional 10 senior experts will be deployed to Milan starting on February 4 to support the Technology Operations Centre, which oversees the digital systems supporting the Games. These government teams will operate alongside nearly 100 specialists from Deloitte and approximately 300 personnel from the local organizing committee and technology partners. Together, these groups will manage cybersecurity monitoring, incident response, and system resilience across all Olympic venues.

If threats keep developing during the Games, the agency will continuously feed intelligence into technical operations teams to support rapid decision-making. The guiding objective remains consistent. Detect emerging risks early, interpret threat signals accurately, and respond quickly and effectively when specific dangers become visible. This approach reflects Italy’s broader strategy to protect the digital infrastructure that underpins one of the world’s most prominent international sporting events.


Open-Source AI Models Pose Growing Security Risks, Researchers Warn

Hackers and other criminals can easily hijack computers running open-source large language models and use them for illicit activity, bypassing the safeguards built into major artificial intelligence platforms, researchers said on Thursday. The findings are based on a 293-day study conducted jointly by SentinelOne and Censys, and shared exclusively with Reuters. 

The research examined thousands of publicly accessible deployments of open-source LLMs and highlighted a broad range of potentially abusive use cases. According to the researchers, compromised systems could be directed to generate spam, phishing content, or disinformation while evading the security controls enforced by large AI providers. 

The deployments were also linked to activity involving hacking, hate speech, harassment, violent or graphic content, personal data theft, scams, fraud, and in some cases, child sexual abuse material. While thousands of open-source LLM variants are available, a significant share of internet-accessible deployments were based on Meta’s Llama models, Google DeepMind’s Gemma, and other widely used systems, the researchers said. 

They identified hundreds of instances in which safety guardrails had been deliberately removed. “AI industry conversations about security controls are ignoring this kind of surplus capacity that is clearly being utilized for all kinds of different stuff, some of it legitimate, some obviously criminal,” said Juan Andres Guerrero-Saade, executive director for intelligence and security research at SentinelOne. He compared the problem to an iceberg that remains largely unaccounted for across the industry and the open-source community. 

The study focused on models deployed using Ollama, a tool that allows users to run their own versions of large language models. Researchers were able to observe system prompts in about a quarter of the deployments analyzed and found that 7.5 percent of those prompts could potentially enable harmful behavior. 

Geographically, around 30 per cent of the observed hosts were located in China, with about 20 per cent based in the United States, the researchers said. Rachel Adams, chief executive of the Global Centre on AI Governance, said responsibility for downstream misuse becomes shared once open models are released.  “Labs are not responsible for every downstream misuse, but they retain an important duty of care to anticipate foreseeable harms, document risks, and provide mitigation tooling and guidance,” Adams said.  

A Meta spokesperson declined to comment on developer responsibility for downstream abuse but pointed to the company’s Llama Protection tools and Responsible Use Guide. Microsoft AI Red Team Lead Ram Shankar Siva Kumar said Microsoft believes open-source models play an important role but acknowledged the risks. 

“We are clear-eyed that open models, like all transformative technologies, can be misused by adversaries if released without appropriate safeguards,” he said. 

Microsoft conducts pre-release evaluations and monitors for emerging misuse patterns, Kumar added, noting that “responsible open innovation requires shared commitment across creators, deployers, researchers, and security teams.” 

Ollama, Google and Anthropic did not comment. 

Iran-Linked Hackers Target Human Rights Groups in Redkitten Malware Campaign

A Farsi-speaking threat actor believed to be aligned with Iranian state interests is suspected of carrying out a new cyber campaign targeting non-governmental organizations and individuals documenting recent human rights abuses in Iran, according to a report by HarfangLab. 

The activity, tracked in January 2026 and codenamed RedKitten, appears to coincide with nationwide unrest that erupted in Iran in late 2025 over soaring inflation, rising food prices, and currency depreciation. The protests were followed by a severe security crackdown, mass casualties, and an internet blackout. 

“The malware relies on GitHub and Google Drive for configuration and modular payload retrieval, and uses Telegram for command-and-control,” HarfangLab said. 

Researchers said the campaign is notable for its apparent use of large language models to help develop and coordinate its tooling. The attack chain begins with a 7-Zip archive bearing a Farsi filename, which contains malicious Microsoft Excel files embedded with macros. 

The XLSM spreadsheets purport to list details of protesters who died in Tehran between Dec. 22, 2025, and Jan. 20, 2026. Instead, the files deploy a malicious VBA macro that acts as a dropper for a C# implant known as AppVStreamingUX_Multi_User.dll using a technique called AppDomainManager injection. HarfangLab said the VBA code itself shows signs of being generated by an LLM, citing its structure, variable naming patterns, and comments such as “PART 5: Report the result and schedule if successful.”  
Investigators believe the campaign exploits the emotional distress of people searching for information about missing or deceased protesters. Analysis of the spreadsheet data found inconsistencies such as mismatched ages and birthdates, suggesting the content was fabricated. The implanted backdoor, dubbed SloppyMIO, uses GitHub as a dead drop resolver to obtain Google Drive links hosting images that conceal configuration data using steganography. This data includes Telegram bot tokens, chat IDs, and links to additional modules. 

The malware supports multiple modules that allow attackers to run commands, collect and exfiltrate files, establish persistence through scheduled tasks, and launch processes on infected systems. “The malware can fetch and cache multiple modules from remote storage, run arbitrary commands, collect and exfiltrate files and deploy further malware with persistence via scheduled tasks,” HarfangLab said. “SloppyMIO beacons status messages, polls for commands and sends exfiltrated files over to a specified operator leveraging the Telegram Bot API for command-and-control.” 

Attribution to Iranian-linked actors is based on the use of Farsi-language artifacts, protest-themed lures, and tactical overlaps with earlier operations, including campaigns associated with Tortoiseshell, which previously used malicious Excel documents and AppDomainManager injection techniques. The use of GitHub as part of the command infrastructure mirrors earlier Iranian-linked operations. In 2022, Secureworks, now part of Sophos, documented a campaign by a sub-group of Nemesis Kitten that also leveraged GitHub to distribute malware. 

HarfangLab noted that reliance on widely used platforms such as GitHub, Google Drive, and Telegram complicates traditional infrastructure-based attribution but can also expose operational metadata that poses risks to the attackers themselves. The findings follow recent disclosures by U.K.-based Iranian activist and cyber investigator Nariman Gharib, who detailed a separate phishing campaign using a fake WhatsApp Web login page to hijack victims’ accounts. 

“The page polls the attacker’s server every second,” Gharib said. “This lets the attacker serve a live QR code from their own WhatsApp Web session directly to the victim.” That phishing infrastructure was also designed to request access to a victim’s camera, microphone, and location, effectively turning the page into a surveillance tool. The identity and motive of the operators behind that campaign remain unclear. 

Separately, TechCrunch reporter Zack Whittaker reported that related activity also targeted Gmail credentials using fake login pages, impacting around 50 victims across the Kurdish community, academia, government, and business sectors. The disclosures come amid growing scrutiny of Iranian-linked cyber groups following a major data leak affecting Charming Kitten, which exposed details about its operations and a surveillance platform known as Kashef. Gharib has also highlighted leaked records tied to Ravin Academy, a cybersecurity school linked to Iran’s Ministry of Intelligence and Security, which was sanctioned by the United States in 2022.

Open VSX Supply Chain Breach Delivers GlassWorm Malware Through Trusted Developer Extensions

 

Cybersecurity experts have uncovered a supply chain compromise targeting the Open VSX Registry, where unknown attackers abused a legitimate developer’s account to distribute malicious updates to unsuspecting users.

According to findings from Socket, the attackers infiltrated the publishing environment of a trusted extension author and used that access to release tainted versions of widely used tools.

"On January 30, 2026, four established Open VSX extensions published by the oorzc author had malicious versions published to Open VSX that embed the GlassWorm malware loader," Socket security researcher Kirill Boychenko said in a Saturday report.

The compromised extensions had long been considered safe and were positioned as genuine developer utilities, with some having been available for more than two years.

"These extensions had previously been presented as legitimate developer utilities (some first published more than two years ago) and collectively accumulated over 22,000 Open VSX downloads prior to the malicious releases."

Socket noted that the incident stemmed from unauthorized access to the developer’s publishing credentials. The Open VSX security team believes the breach may have involved a leaked access token or similar misuse of credentials. All affected versions have since been taken down from the registry.

Impacted extensions include:
  • FTP/SFTP/SSH Sync Tool (oorzc.ssh-tools — version 0.5.1)
  • I18n Tools (oorzc.i18n-tools-plus — version 1.6.8)
  • vscode mindmap (oorzc.mind-map — version 1.0.61)
  • scss to css (oorzc.scss-to-css-compile — version 1.3.4)
The malicious updates were engineered to deploy GlassWorm, a loader malware linked to an ongoing campaign. The loader decrypts and executes payloads at runtime and relies on EtherHiding—a technique that conceals command-and-control infrastructure—to retrieve C2 endpoints. Its ultimate objective is to siphon Apple macOS credentials and cryptocurrency wallet information.

Before activating, the malware profiles the infected system and checks locale settings, avoiding execution on systems associated with Russian regions, a behavior often seen in malware tied to Russian-speaking threat groups.

The stolen data spans a broad range of sensitive assets, including browser credentials, cryptocurrency wallets, iCloud Keychain data, Safari cookies, Apple Notes, user documents, VPN configurations, and developer secrets such as AWS and SSH credentials.

The exposure of developer-related data is particularly dangerous, as it can lead to deeper enterprise breaches, cloud account takeovers, and lateral movement across networks.

"The payload includes routines to locate and extract authentication material used in common workflows, including inspecting npm configuration for _authToken and referencing GitHub authentication artifacts, which can provide access to private repositories, CI secrets, and release automation," Boychenko said.

What sets this incident apart is the delivery method. Instead of relying on fake or lookalike extensions, the attackers leveraged a real developer’s account to push the malware—an evolution from earlier GlassWorm campaigns that depended on typosquatting and brand impersonation.

"The threat actor blends into normal developer workflows, hides execution behind encrypted, runtime-decrypted loaders, and uses Solana memos as a dynamic dead drop to rotate staging infrastructure without republishing extensions," Socket said. "These design choices reduce the value of static indicators and shift defender advantage toward behavioral detection and rapid response."

ShinyHunters Claims Match Group Data Breach Exposing 10 Million Records

 

A new data theft has surfaced linked to ShinyHunters, which now claims it stole more than 10 million user records from Match Group, the U.S. company behind several major swipe-based dating platforms. The group has positioned the incident as another major addition to its breach history, alleging that personal data and internal materials were taken without authorization. 

According to ShinyHunters, the stolen data relates to users of Hinge, Match.com, and OkCupid, along with hundreds of internal documents. The Register reported seeing a listing on the group’s dark web leak site stating that “over 10 million lines” of data were involved. The exposure was also linked to AppsFlyer, a marketing analytics provider, which was referenced as the likely source connected to the incident. 

Match Group confirmed it is investigating what it described as a recently identified security incident, and said some user data may have been accessed. The company stated it acted quickly to terminate the unauthorized access and is continuing its investigation with external cybersecurity experts. Match Group also said there was no indication that login credentials, financial information, or private communications were accessed, and added that it believes only a limited amount of user data was affected. 

It said notifications are being issued to impacted individuals where appropriate. However, Match Group did not disclose what categories of data were accessed, how many users were impacted, or whether any ransom demand was made or paid, leaving key details about the scope and motivation unresolved. Cybernews, which reviewed samples associated with the listing, reported that the dataset appears to include customer personal data, some employee-related information, and internal corporate documents. 

The analysis also suggested the presence of Hinge subscription details, including user IDs, transaction IDs, payment amounts, and records linked to blocked installations, along with IP addresses and location-related data. In a separate post published the same week, ShinyHunters also claimed it had stolen data from Bumble. The group uploaded what it described as 30 GB of compressed files allegedly sourced from Google Drive and Slack. The claims come shortly after researchers reported that ShinyHunters targeted around 100 organizations by abusing stolen Okta single sign-on credentials. The alleged victim list included well-known SaaS and technology firms such as Atlassian, AppLovin, Canva, Epic Games, Genesys, HubSpot, Iron Mountain, RingCentral, and ZoomInfo, among others. 

Bumble has issued a statement saying that one contractor’s account had been compromised in a phishing incident. The company said the account had limited privileges but was used for brief unauthorized access to a small portion of Bumble’s network. Bumble stated its security team detected and removed the access quickly, confirmed the incident was contained, engaged external cybersecurity experts, and notified law enforcement. Bumble also emphasized that there was no access to its member database, member accounts, the Bumble app, or member direct messages or profiles.

Apple's New Feature Will Help Users Restrict Location Data


Apple has introduced a new privacy feature that allows users to restrict the accuracy of location data shared with cellular networks on a few iPad models and iPhone. 

About the feature

The “Limit Precise Location” feature will start after updating to iOS26.3 or later. It restricts the information that mobile carriers use to decide locations through cell tower connections. Once turned on, cellular networks can only detect the device’s location, like neighbourhood instead of accurate street address. 

According to Apple, “The precise location setting doesn't impact the precision of the location data that is shared with emergency responders during an emergency call.” “This setting affects only the location data available to cellular networks. It doesn't impact the location data that you share with apps through Location Services. For example, it has no impact on sharing your location with friends and family with Find My.”

Users can turn on the feature by opening “Settings,” selecting “Cellular,” “Cellular Data Options,” and clicking the “Limit Precise Location” setting. After turning on limited precise location, the device may trigger a device restart to complete activation. 

The privacy enhancement feature works only on iPhone Air, iPad Pro (M5) Wi-Fi + Cellular variants running on iOS 26.3 or later. 

Where will it work?

The availability of this feature will depend on carrier support. The mobile networks compatible are:

EE and BT in the UK

Boost Mobile in the UK

Telecom in Germany 

AIS and True in Thailand 

Apple hasn't shared the reason for introducing this feature yet.

Compatibility of networks with the new feature 

Apple's new privacy feature, which is currently only supported by a small number of networks, is a significant step towards ensuring that carriers can only collect limited data on their customers' movements and habits because cellular networks can easily track device locations via tower connections for network operations.

“Cellular networks can determine your location based on which cell towers your device connects to. The limit precise location setting enhances your location privacy by reducing the precision of location data available to cellular networks,”

WhatsApp Launches High-Security Mode for Ultimate User Protection

 

WhatsApp has launched a new high-security mode called "Strict Account Settings," providing users with enhanced defenses against sophisticated cyber threats. This feature, introduced on January 27, 2026, allows one-click activation and builds on the platform's existing end-to-end encryption. It targets high-risk individuals like journalists and public figures facing advanced attacks, marking WhatsApp as the third major tech firm to offer such protections after Apple's Lockdown Mode and Google's Advanced Protection.

The mode activates multiple safeguards simultaneously through a simple toggle in WhatsApp settings under Privacy > Advanced. It blocks media files and attachments from unknown senders, preventing potential malware delivery via images or documents. Link previews—thumbnails that appear for shared URLs—are disabled to eliminate risks from embedded tracking or exploits, while calls from unknown numbers are automatically silenced, appearing only in missed calls.

These measures address common attack vectors identified in cyber surveillance campaigns. For instance, malicious attachments and link previews have been exploited in spyware incidents targeting activists and reporters. By muting unknown calls, the feature reduces social engineering attempts like vishing scams, where attackers impersonate contacts to extract information. WhatsApp's blog emphasizes that while everyday users benefit from standard encryption, this mode offers "extreme safeguards" for rare, high-sophistication threats.

Similar to competitors' offerings, robust account settings trades convenience for security, limiting app functionality for greater protection. Apple's Lockdown Mode, available since 2022, restricts attachments and browser features, while Google's Android version blocks risky app downloads. Cybersecurity experts have welcomed WhatsApp's step, calling it a "very welcome development" for civil society defenders. The rollout is global on iOS and Android, with full availability expected in coming weeks.

As cyber threats evolve with AI-driven attacks and state-sponsored hacking, features like this empower users to customize defenses. High-risk professionals can now layer protections without switching apps, fostering safer digital communication. However, Meta advises reviewing settings post-activation, as it may block legitimate interactions from new contacts. This move aligns with rising demands for privacy amid global data scandals.

Aisuru Botnet Drives DDoS Attack Volumes to Historic Highs


Currently, the modern internet is characterized by near-constant contention, in which defensive controls are being continuously tested against increasingly sophisticated adversaries. However, there are some instances where even experienced security teams are forced to rethink long-held assumptions about scale and resilience when an incident occurs. 


There has been an unprecedented peak of 31.4 terabits per second during a recent Distributed Denial of Service attack attributed to the Aisuru botnet, which has proven that the recent attack is firmly in that category. 

Besides marking a historical milestone, the event is revealing a sharp change in botnet orchestration, traffic amplification, and infrastructure abuse, demonstrating that threat actors are now capable of generating disruptions at levels previously thought to be theoretical. As a consequence of this attack, critical questions are raised regarding the effectiveness of current mitigation architectures and the readiness of global networks to withstand such an attack.

Aisuru-Kimwolf is at the center of this escalation, a vast array of compromised systems that has rapidly developed into the most formidable DDoS platform to date. Aisuru and its Kimwolf offshoot are estimated to have infected between one and four million hosts, consisting of a diverse array of consumer IoT devices, digital video recorders, enterprise network appliances, and virtual machines based in the cloud. 

As a result of this diversity, the botnet has been able to generate volumes of traffic which are capable of overwhelming critical infrastructure, destabilizing national connectivity, and surpassing the handling capacities of many legacy cloud-based DDoS mitigation services. As far as operational performance is concerned, Aisuru-Kimwolf has demonstrated its consistency in executing hyper-volumetric and packet-intensive campaigns at a scale previously deemed impractical. 

As documented by the botnet, the botnet is responsible for record-breaking flooding reaches 31.4 Tbps, packet rates exceeding 14.1 billion packets per second, and highly targeted DNS-based attacks, including random prefixes and so-called water torture attacks, as well as application-layer HTTP floods that exceed 200 million requests per second. 

As part of these operations, carpet bombing strategies are used across wide areas and packet headers and payload attributes are randomly randomized, a deliberate design choice meant to frustrate signature-based detection and slow automated mitigation. 

The attack usually occurs rapidly and in high intensity bursts that reach peak throughput almost instantly and subside within minutes, creating a hit-and-run attack that makes attribution and response more difficult. 

There was an increase of more than 700 percent in attack potential observed in the Aisuru-Kimwolf ecosystem between the years 2025 and 2026, demonstrating the rapid development of this ecosystem. Aisuru botnets serve as the architectural core of this ecosystem, which are responsible for this activity. 

In addition to serving as a foundational platform, Aisuru enables the development and deployment of derivative variants, including Kimwolf, which extends the botnet's reach and operational flexibility. By continuously exploiting exposed or poorly secured devices in the consumer and cloud environments, the ecosystem has created a globally distributed attack surface reflective of a larger shift in how modern botnets are designed. 

In contrast to the traditional techniques of DDoS relying solely on persistence, Aisuru-based networks emphasize scalability, rapid mobilization, and adaptive attack techniques, signalling the development of an evolving threat model that is reshaping the upper limits of large-scale DDoS attacks. 

Additionally, people have seen a clear shift from long-duration attacks to short-duration, high-intensity attacks that are designed to maximize disruptions while minimizing exposure. There has been a significant decrease in the number of attacks that persist longer than a short period of time, with only a small fraction lasting longer than that period.

There were overwhelmingly three to five billion packets per second at peak for the majority of incidents, while the overall packet rate was overwhelmingly clustered between one and five terabits per second. It reflects a deliberate operational strategy to concentrate traffic within narrowly defined, yet extremely extreme thresholds, with the goal of promoting rapid saturation over prolonged engagement. 

Although these attacks were large in scope, Cloudflare's defenses were automatically able to identify and mitigate them without initiating internal escalation procedures, highlighting the importance of real-time, autonomous mitigation systems in combating modern DDoS threats. 

Although Cloudflare's analysis indicates a notable variation in attack sourcing during the so-called "Night Before Christmas" campaign as compared to previous waves of Aisuru botnet activity originating from compromised IoT devices and consumer routers, Cloudflare's analysis shows a significant change in attack sourcing. 

As part of that wave of activity, Android-based television devices became the primary source of traffic, which highlights how botnet ecosystems continue to engulf non-traditional endpoints. In addition to expanding attack capacity, this diversity of compromised hardware complicates defensive modeling, as traffic originates from devices which blend into legitimate consumer usage patterns, increasing the complexity of defensive modeling. 

These findings correspond to broader trends documented in Cloudflare's fourth-quarter 2025 DDoS Threat Report, which documented a 121 percent increase in attack volume compared with the previous year, totaling 47.1 million incidents. 

A Cloudflare application has been able to mitigate over 5,300 DDoS attacks a day, nearly three quarters of which occurred on the network layer and the remainder targeting HTTP application services. During the final quarter, the number of DDoS attacks accelerated further, increasing by 31 percent from the previous quarter and 58 percent from the previous year, demonstrating a continuing increase in both frequency and intensity. 

A familiar pattern of industry targeting was observed during this period, but it was becoming increasingly concentrated, with telecommunications companies, IT and managed services companies, online gambling platforms and gaming companies experiencing the greatest levels of sustained pressure. Among attack originators, Bangladesh, Ecuador, and Indonesia appeared to be the most frequently cited sites, with Argentina becoming a significant source while Russia's position declined. 

Throughout the year, organizations located in China, Hong Kong, Germany, Brazil, and the United States experienced the largest amount of DDoS attacks, reflecting the persistent focus on regions with dense digital infrastructure and high-value online services. 

According to a review of attack source distribution in the fourth quarter of 2025, there have been notable changes in the geographical origins of malicious traffic, which supports the emergence of a fluid global DDoS ecosystem.

A significant increase was recorded in attack traffic by Bangladesh during the period, displace Indonesia, which had maintained the top position throughout the previous year but subsequently fell to third place. Ecuador ranked second, while Argentina climbed twenty positions to take the fourth position, regaining its first place in attack traffic. 

In addition to Hong Kong, Ukraine, Vietnam, Taiwan, Singapore, and Peru, there were other high-ranking origins, which emphasize the wide international dispersion of attack infrastructure. The relative activity of Russia declined markedly, falling several positions, while the United States also declined, reflecting shifting operational preferences rather than a decline in regional engagement. 

According to a network-level analysis, threat actors continue to favor infrastructure that is scalable, flexible and easy to deploy. A significant part of attacks observed in the past few months have been generated by cloud computing platforms, with providers such as DigitalOcean, Microsoft, Tencent, Oracle, and Hetzner dominating the higher tiers of originating networks with their offerings. 

Throughout the trend, there has been a sustained use of on-demand virtual machines to generate high-volume attack traffic on a short notice basis. In addition to cloud services, traditional telecommunications companies remained prominent players as well, especially in parts of the Asia-Pacific region, including Vietnam, China, Malaysia, and Taiwan.

Large-scale DDoS operations are heavily reliant on both modern cloud environments and legacy carrier infrastructure. The Cloudflare global mitigation infrastructure was able to absorb the unprecedented intensity of the "Night Before Christmas" campaign without compromising service quality. 

In spite of 330 points of presence and a total mitigation capacity of 449 terabits per second, only a small fraction of the total mitigation capacity was consumed, which left the majority of defensive capacity untouched during the record-setting flood of 31.4 Tbps. 

It is noteworthy that detection and mitigation were performed autonomously, without the need for internal alerts or manual intervention, thus underscoring the importance of machine-learning-driven systems for responding to attacks that unfold at a rapid pace. 

As a whole, the campaign illustrates the widening gap between hackers’ growing capability and the defensive limitations of organizations relying on smaller-scale protection services, many of which would have been theoretically overwhelmed by an attack of this magnitude if it had taken place. 

An overall examination of the Aisuru campaign indicates that a fundamental shift has taken place in the DDoS threat landscape, with attack volumes no longer constrained by traditional assumptions about bandwidth ceilings and device types.

The implications for defenders are clear: resilience cannot be treated as a static capability, but must evolve concurrently with adversaries operating at a machine-scale and speed that is increasingly prevalent. 

Due to the complexity of the threats that are becoming more prevalent in the world, organizations have been forced to reevaluate not only their mitigation capabilities, but also the architectural assumptions that lay behind their security strategies, particularly when latency, availability, and trust are essential factors. 

Hypervolumetric attacks are becoming shorter, sharper, and more automated over time. Therefore, effective defense will be dependent on global infrastructure, real-time intelligence, and automated response mechanisms that are capable of absorbing disruptions without human intervention. Accordingly, the Aisuru incident is less of an anomaly and more of a preview of the operational baseline against which modern networks must prepare.

Cloud Storage Scam Uses Fake Renewal Notices to Trick Users


Cybercriminals are running a large-scale email scam that falsely claims cloud storage subscriptions have failed. For several months, people across different countries have been receiving repeated messages warning that their photos, files, and entire accounts will soon be restricted or erased due to an alleged payment issue. The volume of these emails has increased sharply, with many users receiving several versions of the same scam in a single day, all tied to the same operation.

Although the wording of each email differs, the underlying tactic remains the same. The messages pressure recipients to act immediately by claiming that a billing problem or storage limit must be fixed right away to avoid losing access to personal data. These emails are sent from unrelated and randomly created domains rather than official service addresses, a common sign of phishing activity.

The subject lines are crafted to trigger panic and curiosity. Many include personal names, email addresses, reference numbers, or specific future dates to appear genuine. The messages state that a renewal attempt failed or a payment method expired, warning that backups may stop working and that photos, videos, documents, and device data could disappear if the issue is not resolved. Fake account numbers, subscription details, and expiry dates are used to strengthen the illusion of legitimacy.

Every email in this campaign contains a link. While the first web address may appear to belong to a well-known cloud hosting platform, it only acts as a temporary relay. Clicking it silently redirects the user to fraudulent websites hosted on changing domains. These pages imitate real cloud dashboards and display cloud-related branding to gain trust. They falsely claim that storage is full and that syncing of photos, contacts, files, and backups has stopped, warning that data will be lost without immediate action.

After clicking forward, users are shown a fake scan that always reports that services such as photo storage, drive space, and email are full. Victims are then offered a short-term discount, presented as a loyalty upgrade with a large price reduction. Instead of leading to a real cloud provider, the buttons redirect users to unrelated sales pages advertising VPNs, obscure security tools, and other subscription products. The final step leads to payment forms designed to collect card details and generate profit for the scammers through affiliate schemes.

Many recipients mistakenly believe these offers will fix a real storage problem and end up paying for unnecessary products. These emails and websites are not official notifications. Real cloud companies do not solve billing problems through storage scans or third-party product promotions. When payments fail, legitimate providers usually restrict extra storage first and provide a grace period before any data removal.

Users should delete such emails without opening links and avoid purchasing anything promoted through them. Any concerns about storage or billing should be checked directly through the official website or app of the cloud service provider.

Former Google Engineer Convicted in U.S. for Stealing AI Trade Secrets to Aid China-Based Startup

 

A former Google software engineer has been found guilty in the United States for unlawfully taking thousands of confidential Google documents to support a technology venture in China, according to an announcement made by the Department of Justice (DoJ) on Thursday.

Linwei Ding, also known as Leon Ding, aged 38, was convicted by a federal jury on 14 charges—seven counts of economic espionage and seven counts of theft of trade secrets. Prosecutors established that Ding illegally copied more than 2,000 internal Google files containing highly sensitive artificial intelligence (AI) trade secrets with the intent of benefiting the People’s Republic of China (PRC).

"Silicon Valley is at the forefront of artificial intelligence innovation, pioneering transformative work that drives economic growth and strengthens our national security," said U.S. Attorney Craig H. Missakian. "We will vigorously protect American intellectual capital from foreign interests that seek to gain an unfair competitive advantage while putting our national security at risk."

Ding was initially indicted in March 2024 after investigators discovered that he had transferred proprietary data from Google’s internal systems to his personal Google Cloud account. The materials allegedly stolen included detailed information on Google’s supercomputing data center architecture used to train and run AI models, its Cluster Management System (CMS), and the AI models and applications operating on that infrastructure.

The misappropriated trade secrets reportedly covered several critical technologies, including the design and functionality of Google’s custom Tensor Processing Unit (TPU) chips and GPU systems, software that enables chip-level communication and task execution, systems that coordinate thousands of chips into AI supercomputers, and SmartNIC technology used for high-speed networking within Google’s AI and cloud platforms.

Authorities stated that the theft occurred over an extended period between May 2022 and April 2023. Ding, who began working at Google in 2019, allegedly maintained undisclosed ties with two China-based technology firms during his employment, one of which was Shanghai Zhisuan Technologies Co., a startup he founded in 2023. Investigators noted that Ding downloaded large volumes of confidential files in December 2023, just days before resigning from the company.

"Around June 2022, Ding was in discussions to be the Chief Technology Officer for an early-stage technology company based in the PRC; by early 2023, Ding was in the process of founding his own technology company in the PRC focused on AI and machine learning and was acting as the company's CEO," the DoJ said.

The case further alleged that Ding attempted to conceal his actions by copying Google source code into the Apple Notes app on his work-issued MacBook, converting the files into PDFs, and uploading them to his personal Google account. Prosecutors also claimed that he asked a colleague to use his access badge to enter a Google facility, creating the false appearance that he was working from the office while he was actually in China.

The investigation reportedly accelerated in late 2023 after Google learned that Ding had delivered a public presentation in China to prospective investors promoting his startup. According to Courthouse News, Ding’s defense attorney Grant Fondo argued that the information could not qualify as trade secrets because it was accessible to a large number of Google employees. "Google chose openness over security," Fonda said.

In a superseding indictment filed in February 2025, Ding was additionally charged with economic espionage, with prosecutors alleging that he applied to a Beijing-backed Shanghai talent program. Such initiatives were described as efforts to recruit overseas researchers to bolster China’s technological and economic development.

"Ding's application for this talent plan stated that he planned to 'help China to have computing power infrastructure capabilities that are on par with the international level,'" the DoJ said. "The evidence at trial also showed that Ding intended to benefit two entities controlled by the government of China by assisting with the development of an AI supercomputer and collaborating on the research and development of custom machine learning chips."

Ding is set to attend a status conference on February 3, 2026. If sentenced to the maximum penalties, he could face up to 10 years in prison for each trade secret theft charge and up to 15 years for each count of economic espionage.

eScan Antivirus Faces Scrutiny After Compromised Update Distribution


MicroWorld Technologies has acknowledged that there was a breach of its update distribution infrastructure due to a compromise of a server that is used to deliver eScan antivirus updates to end users, which was then used to send an unauthorized file to end users. 

It was reported that the incident took place within a narrow two-hour window on January 20, 2026, in a regional update cluster. It affected only a small fraction of customers who had downloaded updates during that period, and was confined to that cluster. 

Following the analysis of the file, it was confirmed that it was malicious, and this demonstrates how even tightly controlled security ecosystems can be compromised when trust mechanisms are attacked. 

Despite MicroWorld reporting that the affected systems were swiftly isolated, rebuilt from clean baselines, and secured through credential rotation and customer remediation within hours of the incident, the episode took place against the backdrop of escalating cyber risks that are continually expanding. 

An unprecedented convergence of high-impact events took place in January 2026, beginning with a major supply chain breach involving a global antivirus vendor, followed by a technical assault against a European power grid, and the revelation of fresh vulnerabilities in artificial intelligence-driven systems in the first few weeks of January 2026. 

There are a number of developments which have led to industry concerns that the traditional division between defensive software and offensive attack surfaces is eroding, forcing organizations to revisit long-standing assumptions about where trust begins and ends in their security architectures as a result. 

According to further technical analysis, eScan's compromised update channel was directly used to deliver the previously unknown malware, effectively weaponizing a trusted distribution channel that had been trusted. 

A report indicated that multiple security platforms detected and blocked attempted attacks associated with the malicious file the day of its distribution, prompting a quick external scrutiny to take place. It was MicroWorld Technologies who indicated to me that the incident was identified internally on January 20 through a combination of monitoring alerts and customer reports, with the affected infrastructure isolated within an hour of being identified. 

The company issued a security advisory the following day, January 21, as soon as the attack was under control and the situation had been stabilised. In spite of the fact that cybersecurity firm Morphisec later revealed that it had alerted eScan during its own investigation, MicroWorld maintains that containment efforts were already underway when the communication took place. 

The company disputes any suggestion that customers were not informed of the changes, claiming proactive notifications and direct outreach as part of the remediation process to address any concerns. 

A malicious update was launched by a file called Reload.exe, which set off a multi-stage infection sequence on the affected systems through the use of a file called Reload.exe. 

The researchers that conducted the initial analysis reported that the executable modified the local HOSTS file to prevent the delivery of corrective updates from eScan update servers and that this led to a number of client machines experiencing update service errors. 

As part of its persistence strategy, the malware created scheduled tasks, such as CorelDefrag, and maintained communication with external command-and-control infrastructure to retrieve additional payloads, in addition to disrupting operations. 

During the infection process, there was also a secondary malicious component called consctlx.exe written to the operating system, which further embedding the threat within the system. A further detail provided by Morphisec, an endpoint security company, provided a deeper technical insight into the underlying mechanism and intent of the malicious update distributed through the trusted infrastructure of eScan. 

As Morphisec stated in its security bulletin, the compromised update package contained a modified version of the eScan update component Reload.exe that was distributed both to enterprise environments and consumer environments via legitimate update channels. 

Despite the binary's appearance of being signed with eScan's code signing certificate, validation checks conducted by Windows and independent analysis platforms revealed that the signature was not valid. Morphisec's analysis revealed that the altered Reload.exe functions as a loader for a malware framework that consists of several stages. This raises concerns about certificate integrity and abuse of trusted signing processes. 

When the component is executed, it establishes persistence on infected machines, executes arbitrary commands, and alters the Windows HOSTS file to prevent access to eScan's update servers, preventing eScan from releasing updates by using routine update mechanisms.

Additionally, the malware started communicating outwards with a distributed command-and-control infrastructure, thus allowing it to download additional payloads from a variety of different domains and IP addresses in order to increase its reach.

According to Morphisec, the final stage of the attack chain involved the deployment of a second executable, CONSCTLX.exe. This secondary executable acted as both a backdoor and a persistent downloader.

A malicious component that was designed to maintain long-term access created scheduled tasks with benign-sounding names like CorelDefrag that were designed to avoid casual inspection while ensuring that the task would execute across restarts as well. 

The company MicroWorld Technologies developed a remediation utility in response to the incident that is specifically intended to identify and reverse unauthorized changes introduced by the malicious update. Using this tool, the company claims that normal update functionality is restored, a successful cleanup has been verified, and the process only requires a standard reboot of the computer to complete. 

Several companies, including eScan and Morphisec, have advised customers to take additional network-level security measures to protect themselves from further malicious communications during the recovery phase of the campaign by blocking the command-and-control endpoints associated with it. 

In addition, the incident has raised concerns about the recurring exploitation of antivirus update mechanisms, which have caused an increase in industry concern. There was an incident of North Korean threat actors exploiting eScan’s update process in 2024 to install backdoors inside corporate networks, illustrating again how security infrastructure remains one of the most attractive targets for state-sponsored attacks, particularly those aiming for high volumes of information. 

As this breach unfolds, it is part of a wider pattern of consequential supply chain incidents that have taken place in early 2026. These incidents range from destructive malware targeting European energy systems to large-scale intellectual property theft coupled with soon-to-appear AI-driven assault tactics. 

The events highlighted by these events also point to a persistent strategic reality in that organizations are increasingly dependent on trusted vendors and automated updates pipelines. If trust is compromised across the digital ecosystem, defensive technologies can become vectors of systemic risk as a result of a compromise in trust. 

In an industry context, the incident is notable for the unusual method of delivery used by the perpetrators. In spite of the fact that software supply chain compromises have been a growing problem over the past few years, malware is still uncommonly deployed through the security product’s own update channel. 

An analysis of the implants involved indicates that a significant amount of preparation has been performed and that the target environment is well known. A successful operation would have required attackers to have acquired access to eScan’s update infrastructure, reverse engineering aspects of its update workflow, and developing custom malware components designed specifically to function within that ecosystem in order to be successful.

Such prerequisites suggest a deliberate, resource-intensive effort rather than a purely opportunistic one. In addition, a technical examination of the implanted components revealed resilience features that were designed to ensure that attacker access would not be impeded under adverse conditions. 

There were multiple fallback execution paths implemented in the malware, so that continuity would be maintained even if individual persistence mechanisms were disrupted. In one instance, the removal of a scheduled task used to launch a PowerShell payload was not sufficient to neutralize the infection, since the CONSCTLX.exe component would also be able to invoke the same functionality. 

Furthermore, blocking the command-and-control infrastructure associated with the PowerShell stage did not completely eliminate an attacker's capabilities, as CONSCTLX.exe retained the ability to deliver shellcode directly to affected systems, as these design choices highlight the importance of operational redundancy, which is one of the hallmarks of well-planned intrusion campaigns. 

In spite of the sophistication evident in the attack's preparation, the attack's impact was mitigated by its relatively short duration and the techniques used in order to prevent the attack from becoming too effective. 

Modern operating systems have an elevated level of trust when it comes to security software, which means that attackers have theoretically the possibility to exploit more intrusive methods, including kernel-mode implants, which provide attackers with an opportunity to carry out more invasive attacks. 

In this case, however, the attackers relied on user-mode components and commonly observed persistence mechanisms, such as scheduled tasks, which constrained the operation's stealth and contributed to its relatively quick detection and containment, according to analysts. 

It is noteworthy that the behavioral indicators included in eScan's advisory closely correspond with those found by Morphisec independently. Both parties deemed the incident to have a medium-to-high impact on the enterprise environments in question. Additionally, this episode has revealed tensions between the disclosures made by vendors and researchers. 

As reported by Bloomberg News, MicroWorld Technologies has publicly challenged parts of Morphisec's public reporting, claiming some of it was inaccurate. It is understood that they are seeking legal advice in response to these claims. 

It was advised by eScan to conduct targeted checks to determine whether the systems were affected from an operational perspective, including reviewing schedule tasks for anomalous entries, inspecting the system HOSTS file for blocked eScan domains, and reviewing update logs from January 20 for irregularities. 

A remediation utility has been released by the company and is available through its technical support channels. This utility is designed to remove malicious components, reverse unauthorized changes, and restore normal update functionality. 

Consequently, customers are advised to block known command-and-control addresses associated with this campaign as a precaution, reinforcing the lesson of the incident: even highly trusted security infrastructure must continually be examined as potential attack surfaces in a rapidly changing threat environment.

New Reprompt URL Attack Exposed and Patched in Microsoft Copilot

 

Security researchers at Varonis have uncovered a new prompt-injection technique targeting Microsoft Copilot, highlighting how a single click could be enough to compromise sensitive user data. The attack method, named Reprompt, abuses the way Copilot and similar generative AI assistants process certain URL parameters, effectively turning a normal-looking link into a vehicle for hidden instructions. While Microsoft has since patched the flaw, the finding underscores how quickly attackers are adapting AI-specific exploitation methods.

Prompt injection attacks work by slipping hidden instructions into content that an AI model is asked to read, such as emails or web pages. Because large language models still struggle to reliably distinguish between data to analyze and commands to execute, they can be tricked into following these embedded prompts. In traditional cases, this might mean white text on a white background or minuscule fonts inside an email that the user then asks the AI to summarize, unknowingly triggering the malicious instructions.

Reprompt takes this concept a step further by moving the injection into the URL itself, specifically into a query parameter labeled “q.” Varonis demonstrated that by appending a long string of detailed instructions to an otherwise legitimate Copilot link, such as “http://copilot.microsoft.com/?q=Hello”, an attacker could cause Copilot to treat that parameter as if the user had typed it directly into the chat box. In testing, this allowed the researchers to exfiltrate sensitive data that the victim had previously shared with the AI, all triggered by a single click on a crafted link.

This behaviour is especially dangerous because many LLM-based tools interpret the q parameter as natural-language input, effectively blurring the line between navigation and instruction. A user might believe they are simply opening Copilot, but in reality they are launching a session already preloaded with hidden commands created by an attacker. Once executed, these instructions could request summaries of confidential conversations, collect personal details, or send data to external endpoints, depending on how tightly the AI is integrated with corporate systems.

After Varonis disclosed the issue, Microsoft moved to close the loophole and block prompt-injection attempts delivered via URLs. According to the researchers, prompt injection through q parameters in Copilot is no longer exploitable in the same way, reducing the immediate risk for end users. Even so, Reprompt serves as a warning that AI interfaces—especially those embedded into browsers, email clients, and productivity suites—must be treated as sensitive attack surfaces, demanding continuous testing and robust safeguards against new injection techniques.

Google Owned Mandiant Finds Vishing Attacks Against SaaS Platforms


Mandiant recently said that it found an increase in threat activity that deploys tradecraft for extortion attacks carried out by a financially gained group ShinyHunters.

  • These attacks use advanced voice phishing (vishing) and fake credential harvesting sites imitating targeted organizations to get illicit access to victims systems by collecting sign-on (SSO) credentials and two factor authentication codes. 
  • The attacks aim to target cloud-based software-as-a-service (SaaS) apps to steal sensitive data and internal communications and blackmail victims. 

Google owned Mandiant’s threat intelligence team is tracking the attacks under various clusters: UNC6661, UNC6671, and UNC6240 (aka ShinyHunters). These gangs might be improving their attack tactics. "While this methodology of targeting identity providers and SaaS platforms is consistent with our prior observations of threat activity preceding ShinyHunters-branded extortion, the breadth of targeted cloud platforms continues to expand as these threat actors seek more sensitive data for extortion," Mandiant said. 

"Further, they appear to be escalating their extortion tactics with recent incidents, including harassment of victim personnel, among other tactics.”

Theft details

UNC6661 was pretending to be IT staff sending employees to credential harvesting links tricking them into multi-factor authentication (MFA) settings. This was found during mid-January 2026.

Threat actors used stolen credentials to register their own device for MFA and further steal data from SaaS platforms. In one incident, the hacker exploited their access to infected email accounts to send more phishing emails to users in cryptocurrency based organizations.

The emails were later deleted to hide the tracks. Experts also found UNC6671 mimicking IT staff to fool victims to steal credentials and MFA login codes on credential harvesting websites since the start of this year. In a few incidents, the hackers got access to Okta accounts. 

UNC6671 leveraged PowerShell to steal sensitive data from OneDrive and SharePoint. 

Attack tactic 

The use of different domain registrars to register the credential harvesting domains (NICENIC for UNC6661 and Tucows for UNC6671) and the fact that an extortion email sent after UNC6671 activity did not overlap with known UNC6240 indicators are the two main differences between UNC6661 and UNC6671. 

This suggests that other groups of people might be participating, highlighting how nebulous these cybercrime organizations are. Furthermore, the targeting of bitcoin companies raises the possibility that the threat actors are searching for other opportunities to make money.

Visual Prompt Injection Attacks Can Hijack Self-Driving Cars and Drones

 

Indirect prompt injection happens when an AI system treats ordinary input as an instruction. This issue has already appeared in cases where bots read prompts hidden inside web pages or PDFs. Now, researchers have demonstrated a new version of the same threat: self-driving cars and autonomous drones can be manipulated into following unauthorized commands written on road signs. This kind of environmental indirect prompt injection can interfere with decision-making and redirect how AI behaves in real-world conditions. 

The potential outcomes are serious. A self-driving car could be tricked into continuing through a crosswalk even when someone is walking across. Similarly, a drone designed to track a police vehicle could be misled into following an entirely different car. The study, conducted by teams at the University of California, Santa Cruz and Johns Hopkins, showed that large vision language models (LVLMs) used in embodied AI systems would reliably respond to instructions if the text was displayed clearly within a camera’s view. 

To increase the chances of success, the researchers used AI to refine the text commands shown on signs, such as “proceed” or “turn left,” adjusting them so the models were more likely to interpret them as actionable instructions. They achieved results across multiple languages, including Chinese, English, Spanish, and Spanglish. Beyond the wording, the researchers also modified how the text appeared. Fonts, colors, and placement were altered to maximize effectiveness. 

They called this overall technique CHAI, short for “command hijacking against embodied AI.” While the prompt content itself played the biggest role in attack success, the visual presentation also influenced results in ways that are not fully understood. Testing was conducted in both virtual and physical environments. Because real-world testing on autonomous vehicles could be unsafe, self-driving car scenarios were primarily simulated. Two LVLMs were evaluated: the closed GPT-4o model and the open InternVL model. 

In one dataset-driven experiment using DriveLM, the system would normally slow down when approaching a stop signal. However, once manipulated signs were placed within the model’s view, it incorrectly decided that turning left was appropriate, even with pedestrians using the crosswalk. The researchers reported an 81.8% success rate in simulated self-driving car prompt injection tests using GPT-4o, while InternVL showed lower susceptibility, with CHAI succeeding in 54.74% of cases. Drone-based tests produced some of the most consistent outcomes. Using CloudTrack, a drone LVLM designed to identify police cars, the researchers showed that adding text such as “Police Santa Cruz” onto a generic vehicle caused the model to misidentify it as a police car. Errors occurred in up to 95.5% of similar scenarios. 

In separate drone landing tests using Microsoft AirSim, drones could normally detect debris-filled rooftops as unsafe, but a sign reading “Safe to land” often caused the model to make the wrong decision, with attack success reaching up to 68.1%. Real-world experiments supported the findings. Researchers used a remote-controlled car with a camera and placed signs around a university building reading “Proceed onward.” 

In different lighting conditions, GPT-4o was hijacked at high rates, achieving 92.5% success when signs were placed on the floor and 87.76% when placed on other cars. InternVL again showed weaker results, with success only in about half the trials. Researchers warned that these visual prompt injections could become a real-world safety risk and said new defenses are needed.

Ivanti Issues Emergency Fixes After Attackers Exploit Critical Flaws in Mobile Management Software




Ivanti has released urgent security updates for two serious vulnerabilities in its Endpoint Manager Mobile (EPMM) platform that were already being abused by attackers before the flaws became public. EPMM is widely used by enterprises to manage and secure mobile devices, which makes exposed servers a high-risk entry point into corporate networks.

The two weaknesses, identified as CVE-2026-1281 and CVE-2026-1340, allow attackers to remotely run commands on vulnerable servers without logging in. Both flaws were assigned near-maximum severity scores because they can give attackers deep control over affected systems. Ivanti confirmed that a small number of customers had already been compromised at the time the issues were disclosed.

This incident reflects a broader pattern of severe security failures affecting enterprise technology vendors in January in recent years. Similar high-impact vulnerabilities have previously forced organizations to urgently patch network security and access control products. The repeated targeting of these platforms shows that attackers focus on systems that provide centralized control over devices and identities.

Ivanti stated that only on-premises EPMM deployments are affected. Its cloud-based mobile management services, other endpoint management products, and environments using Ivanti cloud services with Sentry are not impacted by these flaws.

If attackers exploit these vulnerabilities, they can move within internal networks, change system settings, grant themselves administrative privileges, and access stored information. The exposed data may include basic personal details of administrators and device users, along with device-related information such as phone numbers and location data, depending on how the system is configured.

Ivanti has not provided specific indicators of compromise because only a limited number of confirmed cases are known. However, the company published technical analysis to support investigations. Security teams are advised to review web server logs for unusual requests, particularly those containing command-like input. Exploitation attempts may appear as abnormal activity involving internal application distribution or Android file transfer functions, sometimes producing error responses instead of successful ones. Requests sent to error pages using unexpected methods or parameters should be treated as highly suspicious.

Previous investigations show attackers often maintain access by placing or modifying web shell files on application error pages. Security teams should also watch for unexpected application archive files being added to servers, as these may be used to create remote connections back to attackers. Because EPMM does not normally initiate outbound network traffic, any such activity in firewall logs should be treated as a strong warning sign.

Ivanti advises organizations that detect compromise to restore systems from clean backups or rebuild affected servers before applying updates. Attempting to manually clean infected systems is not recommended. Because these flaws were exploited before patches were released, organizations that had vulnerable EPMM servers exposed to the internet at the time of disclosure should treat those systems as compromised and initiate full incident response procedures rather than relying on patching alone. 

CRIL Uncovers ShadowHS: Fileless Linux Post-Exploitation Framework Built for Stealthy Long-Term Access

 

Operating entirely in system memory, Cyble Research & Intelligence Labs (CRIL) uncovered ShadowHS, a Linux post-exploitation toolkit built for covert persistence after an initial breach. Instead of dropping binaries on disk, it runs filelessly, helping it bypass standard security checks and leaving minimal forensic traces. ShadowHS relies on a weaponized version of hackshell, enabling attackers to maintain long-term remote control through interactive sessions. This fileless approach makes detection harder because many traditional tools focus on scanning stored files rather than memory-resident activity. 

CRIL found that ShadowHS is delivered using an encrypted shell loader that deploys a heavily modified hackshell component. During execution, the loader reconstructs the payload in memory using AES-256-CBC decryption, along with Perl byte skipping routines and gzip decompression. After rebuilding, the payload is executed via /proc//fd/ with a spoofed argv[0], a method designed to avoid leaving artifacts on disk and evade signature-based detection tools. 

Once active, ShadowHS begins with reconnaissance, mapping system defenses and identifying installed security tools. It checks for evidence of prior compromise and keeps background activity intentionally low, allowing operators to selectively activate functions such as credential theft, lateral movement, privilege escalation, cryptomining, and covert data exfiltration. CRIL noted that this behavior reflects disciplined operator tradecraft rather than opportunistic attacks. 

ShadowHS also performs extensive fingerprinting for commercial endpoint tools such as CrowdStrike, Tanium, Sophos, and Microsoft Defender, as well as monitoring agents tied to cloud platforms and industrial control environments. While runtime activity appears restrained, CRIL emphasized the framework contains a wider set of dormant capabilities that can be triggered when needed. 

A key feature highlighted by CRIL is ShadowHS’s stealthy data exfiltration method. Instead of using standard network channels, it leverages user-space tunneling over GSocket, replacing rsync’s default transport to move data through firewalls and restrictive environments. Researchers observed two variants: one using DBus-based tunneling and another using netcat-style GSocket tunnels, both designed to preserve file metadata such as timestamps, permissions, and partial transfer state. 

The framework also includes dormant modules for memory dumping to steal credentials, SSH-based lateral movement and brute-force scanning, and privilege escalation using kernel exploits. Cryptomining support is included through tools such as XMRig, GMiner, and lolMiner. ShadowHS further contains anti-competition routines to detect and terminate rival malware like Rondo and Kinsing, as well as credential-stealing backdoors such as Ebury, while checking kernel integrity and loaded modules to assess whether the host is already compromised or under surveillance.

CRIL concluded that ShadowHS highlights growing challenges in securing Linux environments against fileless threats. Since these attacks avoid disk artifacts, traditional antivirus and file-based detection fall short. Effective defense requires monitoring process behavior, kernel telemetry, and memory-resident activity, focusing on live system behavior rather than static indicators.

Malicious Chrome Extensions Hijack Affiliate Links and Steal ChatGPT Tokens

 

Cybersecurity researchers have uncovered a alarming surge in malicious Google Chrome extensions that hijack affiliate links, steal sensitive data, and siphon OpenAI ChatGPT authentication tokens. These deceptive add-ons, masquerading as handy shopping aids and AI enhancers, infiltrate the Chrome Web Store to exploit user trust. Disguised tools like Amazon Ads Blocker from "10Xprofit" promise ad-free browsing but secretly swap creators' affiliate tags with the developer's own, robbing influencers of commissions across Amazon, AliExpress, Best Buy, Shein, Shopify, and Walmart.

Socket Security identified 29 such extensions in this cluster, uploaded as recently as January 19, 2026, which scan product URLs without user interaction to inject tags like "10xprofit-20." They also scrape product details to attacker servers at "app.10xprofit[.]io" and deploy fake "LIMITED TIME DEAL" countdowns on AliExpress pages to spur impulse buys. Misleading store listings claim mere "small commissions" from coupons, violating policies that demand clear disclosures, user consent for injections, and single-purpose designs.

Broadcom's Symantec separately flagged four data-thieving extensions with over 100,000 installs, including Good Tab, which relays clipboard access to "api.office123456[.]com," and Children Protection, which harvests cookies, injects ads, and executes remote JavaScript. DPS Websafe hijacks searches to malicious sites, while Stock Informer exposes users to an old XSS flaw (CVE-2020-28707). Researchers Yuanjing Guo and Tommy Dong stress caution even with trusted sources, as broad permissions enable unchecked surveillance.

LayerX exposed 16 coordinated "ChatGPT Mods" extensions—downloaded about 900 times—that pose as productivity boosters like voice downloaders and prompt managers. These inject scripts into chatgpt.com to capture session tokens, granting attackers full account access to conversations, metadata, and code. Natalie Zargarov notes this leverages AI tools' high privileges, turning trusted brands into deception vectors amid booming enterprise AI adoption.

Compounding risks, the "Stanley" malware-as-a-service toolkit, sold on Russian forums for $2,000-$6,000, generates note-taking extensions that overlay phishing iframes on bank sites while faking legitimate URLs. Premium buyers get Chrome Store approval guarantees and C2 panels for victim management; it vanished January 27, 2025, post-exposure but may rebrand. Varonis' Daniel Kelley warns browsers are now prime endpoints in BYOD and remote setups.

Users must audit extensions for mismatched features, excessive permissions, and vague disclosures—remove suspects via Chrome settings immediately. Limit installs to verified needs, favoring official apps over third-party tweaks. As e-commerce and AI extensions multiply, proactive vigilance thwarts financial sabotage and data breaches in this evolving browser battlefield.