The Shift from Cyber Defense to Recovery-Driven Security


 

There has been a structural recalibration of cybersecurity strategies as organizations recognize that breaches impact operations, finances, and reputation in ways that extend far beyond the moment of intrusion. 

Incidents that once remained within the domain of IT are now affecting the entire organization, with containment cycles lasting up to months and remediation costs reaching tens of millions for large-scale breaches. 

In response, leaders are shifting their focus from absolute prevention to sustained operational continuity, recognizing that resilience is defined not by the absence of attacks but by the ability to recover quickly and precisely. 

The shift is driving a renewed focus on creating integrated cyber resilience frameworks that align business continuity objectives with security controls, ensuring critical systems remain recoverable even after active compromises. This evolution has also exposed a disconnect between security enforcement and operational accessibility. 

The cybersecurity function has historically prioritized perimeter hardening and strict authentication, whereas business operations demand uninterrupted data availability with minimal friction. As the threat landscape intensifies, these competing priorities collide and often reveal inefficiencies: layered authentication mechanisms, while indispensable, can inadvertently delay recovery workflows and extend downtime during critical incidents.

Organizations are beginning to reconcile this divide by integrating adaptive intelligence and automation into Zero Trust architectures. Rather than treating security and recovery as opposing forces, they are designing environments where continuous verification coexists with streamlined restoration capabilities. 

At its core, Zero Trust is a strategic model rather than a single technology: it requires rigorous, context-aware authentication drawing on multiple data points before granting access. Combined with intelligent recovery systems, this approach redefines resilience by enabling secure access without compromising recovery agility, producing high-assurance environments that can maintain operations even under persistent threat. 
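
The "multiple data points" idea can be made concrete with a toy policy engine that scores several request signals instead of trusting network location. This is a minimal sketch; the signal names, thresholds, and decision tiers are illustrative assumptions, not taken from any specific Zero Trust product.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    # Signals a Zero Trust policy engine might weigh; all names are illustrative.
    device_compliant: bool      # endpoint meets patch/EDR baseline
    mfa_verified: bool          # recent multi-factor authentication
    geo_expected: bool          # request origin matches the user's usual region
    resource_sensitivity: int   # 1 (low) .. 3 (high)

def decide(req: AccessRequest) -> str:
    """Return 'allow', 'step-up', or 'deny' from the combined context."""
    score = sum([req.device_compliant, req.mfa_verified, req.geo_expected])
    if score == 3:
        return "allow"
    if score == 0:
        return "deny"
    # Any missing signal forces re-verification; sensitive resources always do.
    return "step-up"

print(decide(AccessRequest(True, True, True, 2)))    # allow
print(decide(AccessRequest(True, False, True, 3)))   # step-up
```

The point of the sketch is that no single factor is decisive: a valid password from a non-compliant device still triggers step-up verification, which is what distinguishes continuous verification from perimeter checks.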

With the increased sophistication of ransomware campaigns, conventional backup-centric strategies are revealing their limitations, as adversaries increasingly design attacks that extend beyond the initial system compromises. Threat actors execute long reconnaissance phases during many incidents, mapping enterprise environments, identifying high-value assets, and, critically, locating backups and undermining them before encrypting or destroying data.

Cybercrime has evolved into a coordinated, enterprise-like operation that deliberately targets a wide range of entities, with operational disruption engineered to maximize leverage. When attackers compromise recovery pathways, they effectively eliminate an organization's ability to restore from trusted states, amplifying downtime and increasing financial and regulatory risk. 

Forward-looking organizations are repositioning their security postures to reflect this inevitability, incorporating defensive controls into a more holistic model that includes assured recoverability. This approach integrates cyber resilience and cyber recovery: the objective is not only to withstand intrusion attempts but to maintain data integrity, availability, and rapid restoration under adversarial conditions. 

Modern cyber recovery architectures reflect these evolving threat dynamics by building resilience in from the start, repositioning data protection from a passive safeguard to an active line of defense. Organizations are increasingly adopting hardened recovery frameworks that include air-gapped vaulting and immutable storage, ensuring backup data cannot be manipulated by adversaries while enabling integrity validation through advanced malware scanning before restoration. 

Complementing this, recovery processes are tested in isolated, controlled virtual environments, and point-in-time restoration capabilities can return systems to a known, uncompromised state with minimal operational disruption. 

Separate recovery enclaves are also crucial: decoupling backup infrastructure from production networks eliminates lateral movement pathways and limits credential-based compromise. This architecture treats security and compliance requirements not as an afterthought but as integral, supported by comprehensive audit trails, data tagging, and a verifiable chain of custody. Together, these capabilities give organizations a structured, audit-ready recovery posture that maintains business continuity even under sustained cyber pressure, marking a transition away from purely reactive incident response.

Organizations are also extending their resilience frameworks beyond safeguarding backup repositories to maintaining continuous visibility into repository integrity and behavior. Threat actors increasingly employ persistence-driven techniques that alter backup configurations or introduce incremental data corruption, eroding reliable recovery points over time, often without triggering immediate alerts. 

Without granular monitoring, manipulations of this kind can go undetected until recovery is initiated, by which point recovery pathways may already be compromised. For this reason, enterprises are integrating advanced telemetry, behavioral analytics, and anomaly detection into backup ecosystems, enabling early detection of irregular access patterns, unauthorized configuration changes, and deviations in data consistency. 

By enhancing proactive visibility, enterprises can not only respond more quickly to incidents but also prevent adversaries from dismantling recovery capabilities silently. Rapid recovery is of little value if latent threats are reintroduced into production environments. 

Furthermore, it is important to ensure that recovered data is intact and uncompromised. In this regard, organizations are integrating validation layers, such as isolated forensic sandboxes and automated recovery testing, to verify backup integrity well in advance of a loss. 
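
The validation idea can be expressed concretely: record a cryptographic manifest when a backup is taken, then verify every file against it before restoration is allowed. The sketch below is a minimal illustration of that pattern, not any vendor's implementation; the function names are hypothetical.

```python
import hashlib
from pathlib import Path

def build_manifest(backup_dir: Path) -> dict:
    """Record a SHA-256 digest for every file at backup time."""
    return {
        str(p.relative_to(backup_dir)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(backup_dir.rglob("*")) if p.is_file()
    }

def verify_before_restore(backup_dir: Path, manifest: dict) -> list:
    """Return relative paths whose current digest deviates from the manifest."""
    current = build_manifest(backup_dir)
    all_paths = set(manifest) | set(current)
    return sorted(p for p in all_paths if manifest.get(p) != current.get(p))

# Usage sketch: refuse restoration if anything drifted since backup time.
# if verify_before_restore(vault_path, manifest):
#     raise RuntimeError("backup integrity check failed; do not restore")
```

Because incremental corruption of recovery points is a known adversary tactic, the check has to cover silent modification, deletion, and insertion alike, which is why the comparison unions both path sets rather than iterating the manifest alone.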

This represents a comprehensive architectural shift in which recovery is engineered as a fundamental capability rather than a reactive measure. By embedding immutability, isolation, continuous monitoring, and trusted validation into data protection strategies from conception, enterprises position themselves to sustain operations with minimal disruption. 

Consequently, resilience no longer rests on evading every attack, but on restoring systems as quickly and precisely as possible when defenses are inevitably breached. Cybersecurity effectiveness is no longer defined by absolute prevention, but by the assurance that controlled, reliable recovery can be achieved under adverse circumstances. 

A growing number of adversaries continue to develop techniques that bypass traditional defenses and target recovery mechanisms themselves, forcing organizations to adopt a design philosophy based on the expectation of compromise rather than treating compromise as an exception. 

Maintaining operational continuity requires that security postures, continuous monitoring, and resilient recovery architectures be integrated cohesively. To mitigate the cascading impact of cyber incidents, enterprises should align detection capabilities with verified restoration processes and embed trust throughout the recovery lifecycle. 

The key to establishing resilience is not eliminating risk, but building the capacity to absorb disruption, restore critical systems with integrity, and sustain business operations in a world where cyber incidents have become an operational certainty rather than a mere possibility.

AI Was Meant to Help. So Why Is It Making Work Harder for Women in Indonesia?

 



Artificial intelligence is often presented as a neutral and forward-looking force that improves efficiency and removes human bias from decision-making. In practice, however, many women working in Indonesia’s gig economy experience these systems very differently. Rather than easing workloads, AI-driven platforms are intensifying existing pressures.

Recent research examining female gig workers introduces the concept of “AI colonialism.” This idea describes how older patterns of domination continue through digital systems. In this framework, powerful technology actors, largely based in wealthier regions, extract labour, data, and economic value from workers in developing countries, reinforcing unequal global relationships. The structure resembles historical colonial systems, but operates through algorithms and platforms instead of direct political control.

In Indonesia, platforms such as Gojek, Grab, Maxim, and Shopee rely heavily on informal workers. These companies have not transformed the nature of employment. Instead, they have digitised an already informal labour market. Workers are labelled as independent “partners,” which excludes them from basic protections such as minimum wages, paid sick leave, and maternity benefits. Earnings depend entirely on the number of completed tasks and algorithm-based performance scores.

For women, this structure intersects with what is often described as the “double burden,” where paid work must be balanced alongside unpaid domestic responsibilities. One delivery worker, Lia, begins her day before sunrise by preparing meals and organising her children’s routines. Only after completing these responsibilities can she log into the platform. As she explains, the system recognises only whether she is online, not the constraints shaping her availability.

Platform algorithms prioritise continuous, uninterrupted activity. Incentive systems often require completing a fixed number of orders within strict time windows. For workers managing caregiving roles, this creates structural disadvantages. Logging off to attend to family responsibilities can result in lost bonuses, while reducing work hours due to fatigue or health issues leads to declining performance metrics.

This reflects a greater economic reality in which unpaid domestic labour underpins the formal economy without recognition or compensation. Instead of addressing this imbalance, AI systems can intensify it. Another worker, Cinthia, observed a noticeable drop in job assignments after taking time off due to illness. The experience created a sense that the system penalises any interruption, making workers reluctant to pause even when necessary.

Although algorithms do not explicitly target women, they are designed around an ideal worker who is always available and unconstrained by caregiving duties. This assumption produces indirect but consistent disadvantage. The claim that digital platforms operate neutrally is further challenged by everyday experiences. For example, a driver named Yanti often informs passengers in advance that she is female, leading to frequent cancellations. While the system records these cancellations, it does not capture the gender bias behind them.

Safety concerns also shape participation. Many women avoid working late hours due to risk, which limits access to peak-demand periods and higher earnings. The system interprets this reduced availability as lower productivity. Scholars such as Virginia Eubanks have argued that automated systems frequently replicate and amplify existing social inequalities rather than eliminate them.

Similar patterns have been observed in other countries. In India, women working in ride-hailing services report lower average earnings, partly because safety considerations influence when and where they work. Algorithms, however, measure output without accounting for these risks.

Safety challenges persist even within delivery roles. Around 90% of women in group discussions reported choosing delivery work over ride-hailing due to perceived safety advantages, yet harassment remains a concern from both customers and other drivers. During the COVID-19 pandemic, gig workers were classified as essential, but their incomes declined sharply, in some cases by up to 67% in early 2020. To compensate, many worked more than 13 hours a day. Despite these conditions, platform performance systems remained unchanged, and illness-related breaks often resulted in lower ratings.

This reflects a deeper shift in contemporary labour control, where oversight is embedded within digital systems rather than exercised by human supervisors. AI colonialism, in this sense, extends beyond ownership to the structure of control itself. Workers provide labour, time, and data, while platforms retain authority over decision-making processes.

In response, women workers have developed informal networks through messaging platforms to share information, warn others about unsafe situations, and adapt to algorithmic changes. They support each other by increasing activity on inactive accounts, lending money for operational costs, and collectively responding to account suspensions. When harassment occurs, information is circulated quickly to protect others.

These practices represent a form of mutual support rooted in shared vulnerability. Rather than relying on formal recognition as employees, many women build systems of protection among themselves. This surfaces a form of everyday resistance, where collective action becomes a strategy for navigating structural constraints.

Artificial intelligence is not inherently exploitative. However, when deployed within unequal economic systems, it can reinforce patterns of extraction and imbalance. As digital platforms continue to expand, understanding the lived experiences of workers, particularly women in developing economies, is essential. Behind every efficient system is a human reality shaped by trade-offs between income, safety, and dignity.


Rival Ransomware Gangs 0APT And Krybit Clash In Unusual Cyber Extortion Battle

 

A clash almost unseen among digital outlaws has begun - 0APT, a hacking collective, now warns it will unmask operatives from enemy faction Krybit. This shift came to light through surveillance of hidden online forums. Tension simmers beneath the surface of these underground circles. Rival gangs once operating in parallel seem to fracture under pressure. Trust, usually scarce, is vanishing faster than usual. Evidence points toward escalating friction inside ransomware communities. 

What began as covert threats may reshape alliances unexpectedly. Reports indicate 0APT sent a threat to Krybit, insisting on payment under risk of exposing private records - names, positions, operational files - if ignored. A limited set of claimed stolen materials was published shortly after, serving as evidence - a move mirroring classic dual-pressure methods seen in attacks on businesses. Yet using such an approach toward another illicit network stirs doubt around its real impact, given that public image matters little within hidden communities. 

Even so, the danger remains somewhat real. Because cybercrime networks depend on staying hidden, revealed identities might invite legal trouble or revenge attacks. From the exposed information, security analysts pulled login details tied to Krybit members - alongside digital currency wallets - hinting at weak points in how the group functions. Yet the full impact stays unclear. Krybit's site now shows only a blank page with a standard upkeep notice, hinting at disruptions tied to recent events. Little is known about the collective so far, mainly because big security analysts have published almost nothing on them - possibly a sign they are just beginning operations. 

On the opposite end, 0APT emerged around spring 2026 and gained attention fast, marked by complex tools and methods, even though some doubt surrounds how truthful their early reports of breaches really were. Odd as it seems, infighting among hackers has happened before. Earlier clashes included DragonForce going after opponents - BlackLock, then Mamona - by altering web pages and exposing private messages. 

In much the same way, activity aimed at RansomHub tied back to DragonForce, revealing ongoing friction between ransomware crews. This conflict taking shape between 0APT and Krybit signals changes in how cybercriminals operate - motives like money, dominance, and competition now spark open clashes. With ransomware networks evolving fast, these kinds of face-offs might happen more often, making it harder for security experts to follow the players involved.

UAE Businesses Warned of Escalating AI‑Powered Cyber Threats

 

UAE businesses are being urgently warned about a sharp rise in AI‑powered cyber threats that can compromise systems within hours, and sometimes even minutes, if organisations remain unprepared. Cybercriminals are increasingly using artificial intelligence to craft highly realistic phishing emails, deepfake voice and video impersonations, and automated attacks that exploit gaps in security before teams can respond. 

Nature of AI‑driven threats 

Attackers are leveraging generative AI to personalize scams at scale, including cloned emails, synthetic voices, and fake video calls that mimic senior executives or partners. These AI‑enabled methods make spear‑phishing and impersonation fraud far more convincing, increasing the chances that employees will authorise fraudulent transfers or share sensitive credentials. 

AI tools now allow adversaries to perform reconnaissance, scan for vulnerabilities, and launch password‑guessing and ransomware attacks in a fraction of the time it once took. Security experts note that many organisations now face same‑day compromises, where attackers move from initial access to data theft or system encryption within a single business day.

Impact on UAE firms and the economy 

The UAE’s role as a regional financial and technology hub makes it a prime target for state‑backed and criminal hacking groups that use AI to intensify their campaigns. Breaches can lead to substantial financial losses, reputational damage, regulatory penalties, and disruption of critical services, especially as digital‑government and smart‑city initiatives expand.

Cyber professionals recommend continuous staff training on spotting AI‑powered phishing and impersonation, tightening access controls, securing machine identities, and maintaining tested incident‑response and recovery plans. With AI adoption accelerating across industries, firms that act quickly to strengthen cyber resilience will be better positioned to withstand the next wave of AI‑enhanced cyber threats in the UAE.

Pre-Stuxnet Fast16 Threat Revealed Targeting Engineering Environments


 

New discoveries regarding the early stages of cyber sabotage are changing the historical timeline of offensive digital operations, revealing that sophisticated disruption techniques were developed well before they became widely recognized. 

An undocumented malware framework dating to the mid-2000s underscores the extent to which threat actors were already manipulating industrial and engineering systems with precision, laying the foundations for the highly specialized cyber weapons that would follow. 

Against this backdrop, cybersecurity researchers have identified a Lua-based malware framework named fast16 that predates the Stuxnet worm by several years. According to a detailed analysis published by SentinelOne, the framework originated around 2005, with its operational focus on high-precision engineering and calculation software. 

Rather than causing immediate system failure, fast16 was designed to subtly corrupt computational outputs, introducing inaccuracies that propagate across interconnected environments. With its lightweight scripting capabilities and seamless C/C++ integration, Lua is an excellent choice for modular malware development, allowing attackers to extend functionality without recompiling core components. 

Upon analyzing fast16, researchers identified distinct Lua artifacts, including bytecode signatures beginning with \x1bLua and environmental markers such as LUA_PATH, which allowed them to trace svcmgmt.exe, a sample that initially appeared benign but ultimately proved to be part of the early attack framework.
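
The artifact hunting described here can be reproduced with a trivial scan for the precompiled Lua chunk header, the escape byte 0x1B followed by the ASCII string "Lua", which is the same magic the researchers keyed on. The sketch below is a minimal illustration of the technique, not SentinelOne's actual tooling, and the sample bytes are fabricated for demonstration.

```python
# Precompiled Lua chunk header: ESC (0x1b) followed by "Lua".
LUA_MAGIC = b"\x1bLua"

def find_lua_chunks(data: bytes) -> list:
    """Return every offset where an embedded Lua bytecode header appears."""
    offsets, pos = [], data.find(LUA_MAGIC)
    while pos != -1:
        offsets.append(pos)
        pos = data.find(LUA_MAGIC, pos + 1)
    return offsets

# Fabricated sample: a PE-like prefix, then a Lua chunk header at offset 20.
# The byte after the magic is the bytecode version (0x50 for Lua 5.0).
sample = b"MZ\x90\x00" + b"\x00" * 16 + b"\x1bLua\x50" + b"\x00" * 8
print(find_lua_chunks(sample))  # [20]
```

A signature hit alone is not proof of compromise, since legitimate software also embeds Lua; as in the analysis above, it is a pivot point for deeper inspection of the surrounding binary.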

Researchers Vitaly Kamluk and Juan Andrés Guerrero-Saade concluded that the malware's architecture suggested a deliberate intent to spread disruption through self-propagation mechanisms, effectively standardizing erroneous results across entire facilities. This approach reflects an early understanding of systemic compromise, emphasizing data integrity rather than availability as the primary attack vector. 

Fast16 is estimated to have emerged at least five years before Stuxnet, widely regarded as the world's first digital weapon designed for physical disruption. Stuxnet is historically associated with state-sponsored efforts to disrupt Iran's nuclear infrastructure and later influenced Duqu and other tools; fast16 offers a compelling precedent for it.

The report demonstrates that the conceptual basis for cyber-physical sabotage had already been explored in earlier, less visible campaigns, suggesting a more advanced and complex evolution of offensive cyber capabilities than previously assumed. Further reverse engineering confirmed that fast16 did not conform to typical malware engineering patterns observed in the mid-2010s. 

As Vitaly Kamluk observed, several implementation choices indicated that the project was developed far earlier than its discovery, a view SentinelOne later reinforced through environmental and code-level constraints. 

The sample exhibits compatibility limitations consistent with legacy systems: it executes reliably only on Windows XP and single-core processors, hardware that predates Intel's introduction of multi-core consumer processors in 2006.

Behavioral analysis shows the implant pairs a kernel-level component, fast16.sys, with worm-like propagation routines to establish persistence. Its architecture predates other advanced threats such as Flame, and it is among the earliest known examples of Windows-based malware embedding a Lua virtual machine as an integral component. 

Initially identified as a generic service wrapper, the svcmgmt.exe executable was later discovered to contain the Lua 5.0 runtime and an encrypted bytecode payload that formed the core of the framework. Timestamp metadata indicates a build date of August 2005, while submission to VirusTotal came more than a decade later, further supporting the program's long history.

In-depth inspection revealed tight integration with Windows NT subsystems, including direct interaction with file system, registry, service control, and networking APIs. Alongside the Lua bytecode containing the core execution logic, an associated driver, whose PDB path dates to July 2005, enables interception and manipulation of executable data as it is read from disk, an advanced stealth and control technique. 

Additionally, references to "fast16" have been found within driver lists associated with sophisticated intrusion toolsets reportedly linked to the National Security Agency, disclosed by the Shadow Brokers. This intersection of technical lineage with leaked operational tooling deepens the ambiguity surrounding the framework's origins while highlighting its significance in the early development of cyber-physical attack methodologies. 

Further analysis positions svcmgmt.exe as the operational core of the framework, a highly flexible carrier that can adapt execution paths depending on runtime conditions. SentinelOne asserts that embedded forensic markers, particularly a path in the PDB, link the sample to deconfliction signatures revealed in leaks attributed to National Security Agency tooling, suggesting a far more sophisticated origin. 

From an architectural perspective, the module consists of three components: Lua bytecode controlling configuration and propagation logic, a supporting dynamic library, and a kernel-level driver (fast16.sys) that performs low-level manipulations. Once installed as a Windows service, the malware elevates privileges by activating the kernel implant and initiates a controlled propagation routine targeting legacy Windows environments with weak authentication controls. 

Its conditional execution places particular emphasis on operational stealth: propagation is triggered either manually or when specific security products are detected through registry inspections, indicating an early but deliberate effort to control its spread. Functionally, the kernel driver embodies the framework's sabotage capability, intercepting executable flows and modifying them according to rule-based logic, particularly against binaries compiled with Intel C/C++ tools. This allows precise manipulation of the outputs of high-precision engineering and simulation platforms such as LS-DYNA, PKPM, and MOHID. 

Through the introduction of subtle, systematic deviations into mathematical models, this malware can negatively impact simulation accuracy, undermine research integrity, and affect real-world engineering outcomes over the long term. Further enhancement of situational awareness is provided by supporting modules; for example, a network monitoring component logs connection information through Remote Access Service hooks, strengthening the framework's surveillance capabilities.

The modular separation of a stable execution wrapper from encrypted, task-specific payloads reflects a reusable design philosophy, allowing operators to tailor deployments while preserving a consistent outer binary footprint. These findings significantly revise the timeline of cyber-physical attacks relative to the broader threat landscape. 

Correlations with artifacts released by the Shadow Brokers and with early offensive toolchains suggest that capabilities often associated with later campaigns, including Stuxnet, were being developed, and may have been deployed, years earlier. Fast16 is thus no longer merely an isolated discovery, but a transitional framework bridging covert early-stage experimentation and the more visible evolution of advanced persistent threats.

During the period covered by the report, state-aligned actors operationalized long-term, precision-focused sabotage strategies well before such activities became public knowledge, an era in which software was already becoming a strategic tool for influencing physical systems. 

The emergence of fast16 reframes long-held assumptions about the origins of cyber-physical sabotage, demonstrating that highly targeted, computation-focused attack models were operational well in advance of their public recognition. Its modular design, selective propagation logic, and precision-driven payloads demonstrate a maturity typically associated with later advanced persistent threat campaigns.

Beyond its strategic significance, the report emphasizes the shift away from disruptive attacks targeting system availability toward covert manipulation of data integrity within critical engineering environments. 

Fast16 is therefore both a historical anomaly and a prototype of modern state-aligned cyber operations, in which subtle interference can have far-reaching impact without immediate detection.

Google Chrome Introduces “Skills” to Reuse AI Prompts Across Web Pages with Gemini Integration

 

Google has announced a new wave of AI-powered enhancements for its Chrome browser, unveiling a feature called “Skills.” This addition enables users to store and reuse their preferred AI prompts across different websites, eliminating the need to repeatedly type them.

The new functionality builds on Chrome’s integration with Gemini, which arrived as competition in the browser space heats up with offerings from companies like OpenAI (Atlas), Perplexity (Comet), and The Browser Company (Dia).

Gemini already enables users to interact with web pages by asking questions, generating summaries, or completing tasks. With the addition of Skills, users can now save frequently used prompts and activate them instantly whenever needed.

For example, Google notes that users who regularly ask Gemini for vegan alternatives while browsing recipes can save that instruction as a Skill and apply it seamlessly across multiple sites. These prompts can be saved directly from chat history and later accessed by typing a forward slash (/) or clicking the plus (+) icon. Once selected, the Skill executes on the current page and can also extend to other selected tabs.

Google highlighted that Skills remain flexible, allowing users to modify them at any time. Early testing showed that adopters used the feature for tasks such as tracking nutrition metrics in recipes, comparing products while shopping, and summarizing long-form content.

To simplify onboarding, Google is also launching a Skills library featuring ready-made prompts for common use cases like productivity, budgeting, shopping, and cooking. Users can add these pre-built Skills to their collection and customize them as needed.

Similar to other Gemini-powered actions in Chrome, the browser will request user approval before carrying out sensitive tasks, such as sending emails or scheduling calendar events.

The rollout of Skills begins today for desktop Chrome users logged into their Google accounts. Initially, the feature will only be available when the browser language is set to English (US).

New Malware “Storm” Steals Browser Data and Hijacks Sessions Without Passwords

 



A newly identified infostealer called Storm has emerged on underground cybercrime forums in early 2026, signalling a change in how attackers steal and use credentials. Priced at under $1,000 per month, the malware collects browser-stored data such as login credentials, session cookies, and cryptocurrency wallet information, then covertly transfers the data to attacker-controlled servers where it is decrypted outside the victim’s system.

This change becomes clearer when compared to earlier techniques. Traditionally, infostealers decrypted browser credentials directly on infected machines by loading SQLite libraries and accessing local credential databases. Because of this, endpoint security tools learned to treat such database access as one of the strongest indicators of malicious activity.
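
That heuristic is simple to state: flag any process other than the browser itself touching the browser's credential stores. The sketch below illustrates the idea; the artifact file names are real Chromium and Firefox store names, but the event shape and process allowlist are hypothetical simplifications of what an endpoint agent would actually track.

```python
# Files infostealers target: "Login Data" and "Cookies" are Chromium's
# credential/cookie SQLite databases; logins.json and cookies.sqlite are Firefox's.
CREDENTIAL_ARTIFACTS = {"Login Data", "Cookies", "logins.json", "cookies.sqlite"}

# Illustrative allowlist; a real agent would verify signed binary paths, not names.
BROWSER_PROCESSES = {"chrome.exe", "msedge.exe", "firefox.exe"}

def is_suspicious(process_name: str, file_name: str) -> bool:
    """Classic infostealer tell: a non-browser process reading a credential store."""
    return (file_name in CREDENTIAL_ARTIFACTS
            and process_name.lower() not in BROWSER_PROCESSES)

print(is_suspicious("svchost.exe", "Login Data"))  # True: worth alerting on
print(is_suspicious("chrome.exe", "Cookies"))      # False: the browser's own access
```

As the following paragraphs explain, this is precisely the signal Storm sidesteps: by exfiltrating the encrypted files instead of opening and decrypting them locally, it never produces the database-access behaviour this rule looks for.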

The approach began to break down after Google Chrome introduced App-Bound Encryption in version 127 in July 2024. This mechanism tied encryption keys to the browser environment itself, making local decryption substantially more difficult. Initial bypass attempts relied on injecting into browser processes or exploiting debugging protocols, but these techniques still generated detectable traces.

Storm avoids this entirely by skipping local decryption. Instead, it extracts encrypted browser files and quietly sends them to attacker infrastructure, removing the behavioural signals that endpoint tools typically rely on. It extends this model by supporting both Chromium-based browsers and Gecko-based browsers such as Firefox, Waterfox, and Pale Moon, whereas tools like StealC V2 still handle Firefox data locally.

The data collected includes saved passwords, session cookies, autofill entries, Google account tokens, payment card details, and browsing history. This combination gives attackers everything required to rebuild authenticated sessions remotely. In practice, a single compromised employee browser can provide direct access to SaaS platforms, internal systems, and cloud environments without triggering any password-based alerts.

Storm also automates session hijacking. Once decrypted, credentials and cookies appear in the attacker’s control panel. By supplying a valid Google refresh token along with a geographically matched SOCKS5 proxy, the platform can silently recreate the victim’s active session.

This technique aligns with earlier research by Varonis Threat Labs. Its Cookie-Bite study showed that stolen Azure Entra ID session cookies can bypass multi-factor authentication, granting persistent access to Microsoft 365. Similarly, its SessionShark analysis demonstrated how phishing kits intercept session tokens in real time to defeat MFA protections. Storm packages these methods into a commercial subscription service.
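
Because a replayed cookie is cryptographically valid, server-side defences have to look at context drift instead. The sketch below is a hypothetical scoring heuristic (the attribute set, weights, and threshold are all assumptions for illustration), comparing the context a session was issued in against the context now presenting it.

```python
from dataclasses import dataclass

@dataclass
class SessionContext:
    """Attributes captured when the session cookie was issued."""
    user_agent: str
    country: str
    asn: int  # autonomous system number of the client's IP

def session_anomaly_score(issued: SessionContext, current: SessionContext) -> int:
    """Score how far a presented session has drifted from its issuance
    context. Weights are illustrative; higher scores warrant step-up
    authentication even though the cookie itself is valid.
    """
    score = 0
    if current.user_agent != issued.user_agent:
        score += 2  # cookie replayed from a different browser build
    if current.country != issued.country:
        score += 3  # geographic jump
    if current.asn != issued.asn:
        score += 2  # different network provider
    return score
```

Note the limitation the Storm panel itself exploits: by pairing a stolen cookie with a geographically matched SOCKS5 proxy, an attacker can suppress the country signal, which is why checks on ASN, device fingerprint, and (where supported) token binding matter more than geolocation alone.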

Beyond credentials, the malware collects files from user directories, extracts session data from applications like Telegram, Signal, and Discord, and targets cryptocurrency wallets through browser extensions and desktop applications. It also gathers system information and captures screenshots across multiple monitors. Most operations run in memory, reducing the likelihood of detection.

Its infrastructure design adds resilience. Operators connect their own virtual private servers to Storm’s central system, routing stolen data through infrastructure they control. This setup limits the impact of takedowns, as enforcement actions are more likely to affect individual operator nodes rather than the core service.

Storm supports multi-user operations, allowing teams to divide responsibilities such as log access, malware build generation, and session restoration. It also automatically categorises stolen credentials by service, with visible rules for platforms including Google, Facebook, Twitter/X, and cPanel, helping attackers prioritise targets.

At the time of analysis, the control panel displayed 1,715 log entries linked to locations including India, the United States, Brazil, Indonesia, Ecuador, and Vietnam. While it is unclear whether all entries represent real victims or test data, variations in IP addresses, internet service providers, and data volumes suggest ongoing campaigns.

The logs include credentials associated with platforms such as Google, Facebook, Twitter/X, Coinbase, Binance, Blockchain.com, and Crypto.com. Such information often feeds into underground credential marketplaces, enabling account takeovers, fraud, and more targeted intrusions.

Storm is offered through a tiered pricing model: $300 for a seven-day trial, $900 per month for standard access, and $1,800 per month for a team licence supporting up to 100 operators and 200 builds. Use of an additional crypter is required. Notably, once deployed, malware builds continue operating even after a subscription expires, allowing ongoing data collection.

Security researchers view Storm as part of a broader evolution in credential theft. By shifting decryption to remote servers, attackers avoid detection mechanisms designed to identify on-device activity. At the same time, session cookie theft is increasingly replacing password theft as the primary objective.

The data collected by such tools often marks the beginning of further attacks, surfacing as logins from unusual locations, lateral movement within networks, and other unauthorised access patterns.


Indicators of compromise include:

Alias: StormStealer

Forum ID: 221756

Registration date: December 12, 2025

Current version: v0.0.2.0 (Gunnar)

Build details: Developed in C++ (MSVC/msbuild), approximately 460 KB in size, targeting Windows systems


The advent of Storm underscores how cybercriminal tools are becoming more advanced, automated, and difficult to detect, requiring organisations to strengthen monitoring of sessions, user behaviour, and access patterns rather than relying solely on traditional credential protection.


AI-Driven Breach Hits Government Agencies

 

A lone attacker reportedly used Claude and GPT-4.1 to breach nine Mexican government agencies, exposing data tied to 195 million citizens and showing how generative AI can accelerate cybercrime. The incident, which ran from December 2025 to February 2026, is a stark warning that AI can now amplify a single operator into something closer to a full attack team. 

Between late 2025 and early 2026, the attacker used Claude Code to carry out about 75% of remote commands during the intrusion. Researchers found 1,088 prompts across 34 active sessions, which led to 5,317 AI-executed commands on live victim systems. That level of automation meant the attacker could move through government networks far faster than a human-only workflow would allow.

The operation did not rely on one model alone. When Claude encountered limits, the attacker turned to ChatGPT for help with lateral movement, credential mapping, and other technical steps that supported the breach. A custom 17,550-line Python script then funneled stolen data through OpenAI’s API, generating 2,597 structured intelligence reports across 305 internal servers. 

The stolen material reportedly included tax records, voter information, employee credentials, and other sensitive government data. Beyond the scale of the theft, the bigger problem is what this means for defense teams: AI can shorten the time needed to find weaknesses, write exploits, and organize stolen data. That compression makes traditional detection and response windows much harder to meet. 

This case shows that cybercriminals no longer need large teams to mount sophisticated operations. With the right prompts, a single attacker can use commercial AI systems to plan, automate, and scale an intrusion in ways that were once reserved for advanced groups. Anthropic said it investigated, disrupted the activity, and banned the accounts involved, but the broader lesson is clear: security defenses now need to account for AI-accelerated attacks as a mainstream threat.

ChipSoft Ransomware Incident Disrupts Dutch Healthcare Systems And Hospital Operations

Early in April, a ransomware incident struck ChipSoft, a Dutch firm supplying healthcare software, causing major interruptions for hospitals that rely on its systems. Some facilities had to take systems offline, cutting access to essential tools and forcing them to fall back on contingency plans. When a provider like ChipSoft falls victim, the ripple effects hit care delivery hard, and this event highlights how vulnerable medical networks can be to weaknesses at their suppliers.


Since the incident, Z-CERT, the Dutch agency for health-sector cyber safety, has coordinated with ChipSoft and impacted facilities to evaluate risks, share actionable insights, and support restoration. Updates are still being tracked as medical services adapt to the disruption. To contain further risk, ChipSoft blocked access to major platforms including Zorgportaal, HiX Mobile, and Zorgplatform.

Because hospitals rely on these tools for handling medical records and daily operations, the outage caused serious disruptions. Service recovery is now unfolding step by step, with fresh login credentials being issued alongside updates. Among affected sites, 11 hospitals cut access to ChipSoft tools mid-incident, using network disconnection as a rapid containment measure; connections through protected vendor-linked tunnels were shut down on the guidance of cybersecurity teams.

Although halting some digital pathways slowed the spread of the threat, care routines stumbled briefly at various locations. Outages hit multiple medical centers, including Sint Jans Gasthuis, Laurentius Hospital, VieCuri Medical Center, and Flevo Hospital. Even so, treatment did not break down: extra staff were deployed to support stations, phone lines were expanded under the pressure, and when systems went quiet, people stepped in, swapping screens for spoken handovers to keep care moving.

So far, officials report that critical healthcare operations continued uninterrupted, thanks to workable backup strategies. While investigations continue, nothing yet points to leaked personal health records, though monitoring remains active across systems. The attacker is still unknown, and no known ransomware collective has claimed responsibility. At times during recovery, access to ChipSoft's internal platforms, including its public website, was blocked, showing how deep the impact ran.

The compromise likely began within the supplier's infrastructure, which triggered protective steps among client organizations. Security concerns after the breach have slowed work elsewhere too: the planned rollout of updated patient-records software at Leiden University Medical Center now faces postponement, caught in the ripple effects of ChipSoft's incident.

The incident underscores an ongoing pattern in digital security: hospitals face heightened risk because disruptions to care carry serious consequences and demand swift fixes. When core technology suppliers suffer breaches, the effects ripple through interconnected systems, spreading damage far beyond a single location.

Still working through recovery, teams from Z-CERT and the affected medical facilities aim to bring systems back online without harming patient services. In the wake of the ChipSoft ransomware event, attention has shifted toward building tougher defenses, spotting threats earlier, and weaving more reliable safeguards into health-sector networks.

Surge in Digital Fraud Prompts Consumer Reports to Issue Safety Guidance


 

Digital media has woven mediated communication into nearly every aspect of modern life, fundamentally reshaping the way individuals interact, transact, and manage daily responsibilities. However, this same interconnected infrastructure has also broadened cybercriminal attack surfaces.

The proliferation of communication channels, including voice networks, social platforms, and messaging apps, has driven an increase in both the volume and sophistication of fraud. Occasional phishing emails have given way to persistent, multi-channel intrusion attempts that exploit user trust, behavior, and familiarity with platforms.

In this context, digital fraud, characterized by the exploitation of technological interfaces to target financial assets, sensitive data, and identity credentials, has become a systemic risk. The 2025 Consumer Cyber Readiness assessment found extensive exposure, with nearly half of surveyed individuals reporting a direct encounter with fraudulent schemes.

Financial losses were a measurable component of these incidents, demonstrating the operational effectiveness of current threat models. The data, drawn from a collaborative analysis by consumer advocacy and cybersecurity organizations, also illustrates a shift in attack vectors.

Fraud attempts are now primarily transmitted through digital channels, including email, social media, SMS, and messaging applications. Message-based fraud has grown markedly, with its share increasing year over year, reflecting both higher user engagement on these platforms and the relative ease with which attackers can execute scalable campaigns. Observations of threat actors confirm the trend, indicating that text-based scams alone generate substantial illicit revenue streams.

Even though technology providers are implementing enhanced safeguards and detection mechanisms within their ecosystems, these controls have inherent limitations. Preventing digital fraud increasingly requires user awareness, behavioral vigilance, and proactive security practices tailored to an evolving threat environment.

Against this global backdrop, digital fraud in India has intensified further, with its scale and frequency combining to create sustained financial and psychological pressure on consumers. In recent years, fraudulent communication has become a persistent operational risk within the digital economy rather than an isolated incident.

Successful fraud attacks are not only financially severe but also remarkably efficient: reported loss patterns show threat actors compressing the fraud lifecycle into a few minutes. Combined with the high rate at which recipients interact with suspicious messages, this acceleration points to a behavioral gap that adversaries actively exploit.

Rapid digital adoption has expanded the attack surface across payments, social platforms, and mobile-first services, enabling more targeted and context-aware fraud campaigns. Attack methodologies are also evolving quickly: conventional phishing tactics are increasingly supplemented by artificial-intelligence-driven deception techniques, such as synthetic media and voice impersonation, compounding the challenge.

These tools lend fraudulent communication credibility at scale, making detection more difficult for the typical user and illustrating the continuing disparity between technological sophistication and user readiness. Institutional responses, such as awareness programs and reporting frameworks, are gaining momentum, yet they often operate reactively in an environment of continuous threat innovation.

Unless parallel advances are made in consumer education, real-time threat intelligence, and adaptive regulatory measures, the economic and systemic consequences of digital fraud will continue to hinder the country's digital growth ambitions. It is imperative that practical safeguards at the user level remain a critical line of defense in this increasingly complex threat environment. 

Consumer Reports highlights the importance of using the native security features built into modern smartphones, which are designed to detect and filter potentially malicious communication. Whether through advanced message filtering on iOS devices or automated spam detection within Android-based messaging platforms, these controls provide a first line of defense against high-volume scam attempts.

The report also recommends independently verifying any request before initiating a financial transaction, particularly in scenarios involving urgency or emotional distress, which are common tactics of impersonation-based fraudsters. Technical safeguards alone are not sufficient without disciplined user behavior.

By cross-checking requests through alternate communication channels, users can reduce the risk posed by compromised accounts and deceptive communication. It is also essential to use digital payment applications cautiously, since, despite their efficiency, they frequently lack the robust fraud prevention frameworks associated with traditional banking instruments. Because such platforms are not mandated to provide reimbursement mechanisms, users bear a greater responsibility for due diligence.

Due to this, it is recommended that financial transactions be conducted only between verified and trusted recipients, and that higher-risk payments be made through a more secure and regulated channel, such as credit-backed transactions or direct bank transfers. 

Ultimately, resilience to digital fraud depends on a combination of technological controls, informed user judgment, and proactive risk mitigation within an increasingly adversarial digital environment.

AI Scams Are Becoming Harder to Detect — 7 Warning Signs You Should Watch Closely

Artificial intelligence is not only improving everyday technology but also strengthening both traditional and emerging scam techniques. As a result, avoiding fraud now requires greater awareness of how these schemes are taking new shapes.

Being able to identify scams is an essential skill for everyone, regardless of age. This is especially important as AI tools continue to advance rapidly, contributing to a noticeable increase in reported fraud cases. According to the Federal Bureau of Investigation’s 2025 Internet Crime Report, complaints linked to cryptocurrency and artificial intelligence ranked among the most financially damaging cybercrimes, with total losses approaching $21 billion. The agency also highlighted that, for the first time in its history, its Internet Crime Complaint Center included a dedicated section on artificial intelligence, documenting 22,364 cases that resulted in losses of nearly $893 million.

These scams are increasingly convincing. AI can generate realistic emails and replicate human voices through audio deepfakes, making fraudulent communication difficult to distinguish from legitimate interactions. Because of this, such threats should be treated as ongoing and persistent risks.

Protecting yourself, your family, and your finances requires both instinct and awareness. By training both your attention to detail and your ability to listen carefully, you can better identify suspicious activity. Below are seven warning signs that can help you recognize AI-driven scams and avoid serious consequences.

1. Messages that feel unusually personalized

AI can gather publicly available details, including your job, interests, or recent purchases, to create messages that appear tailored specifically to you. While these messages may seem accurate, they can still contain subtle errors or incorrect assumptions about your life, which should raise concern.


2. Requests that create urgency

Scammers often attempt to rush you with statements such as warnings that your account will be locked, demands for immediate payment, or requests for login credentials to restore access. This pressure is designed to force quick decisions without careful thinking.


3. Messages that appear overly polished

Unlike older scams filled with spelling or grammar mistakes, AI-generated messages are often clear and well-written. However, phrases like “confirm your information to avoid cancellation” or “we noticed unusual activity” should still be treated cautiously, especially if accompanied by suspicious visuals or a lack of supporting detail.


4. Audio that sounds slightly unnatural

Voice-cloning technology can imitate people you know, making phone-based scams more believable. Still, these voices may reveal themselves through unnatural pacing, limited emotional variation, or requests that seem out of character for the person being impersonated.


5. Deepfake videos that seem real but contain flaws

AI can also generate convincing videos of colleagues, family members, or even public figures. These may appear during video calls, workplace interactions, or through compromised social media accounts. Warning signs include inconsistent lighting, unusual shadows, or subtle distortions in facial movement.


6. Attempts to move conversations across platforms

Scammers may begin communication through email or professional platforms and then attempt to shift the interaction to messaging apps, payment platforms, or other channels. This tactic, often supported by chatbot-driven conversations, is used to appear credible while avoiding detection.


7. Unusual or suspicious payment requests

Requests for payment through gift cards, wire transfers, or cryptocurrency remain a major red flag. These methods are difficult to trace and are frequently used in fraudulent schemes, regardless of how legitimate the request may initially appear.


Why awareness matters

While AI has not changed the underlying tactics of scams, it has made them far more refined and scalable. Techniques such as impersonation, urgency, and trust-building are now enhanced through automation and data-driven personalization.

As these technologies become ever more pervasive and capable, the risk will grow with them. Staying cautious, verifying unexpected requests, and sharing this knowledge with friends and family are critical steps in reducing exposure.

In a digital environment where scams increasingly resemble genuine communication, recognizing these warning signs remains one of the most effective ways to stay protected.

Hackers Put 8.3 Million U.S. Crime Tip Records Up for Sale, Raising Security Fears

 

Cybercriminals behind a massive data breach involving 8.3 million crime tip records are now attempting to sell the stolen information for $10,000 in cryptocurrency.

The compromised data includes confidential tips submitted to numerous Crime Stoppers programs run by law enforcement agencies across the United States. It also extends to inputs shared with certain U.S. military units and even educational institutions.

The sale listing, discovered on an underground cybercrime forum, highlights the severity of the breach linked to cloud-based intelligence firm P3 Global Intel. The exposed dataset reportedly contains highly sensitive personal information about individuals identified in tips, including names, email addresses, dates of birth, phone numbers, home addresses, license plate details, Social Security numbers, and even criminal records. In some cases, the leak also reveals identities and details of informants, potentially putting them at risk of retaliation.

Security analysts have previously warned that the breach could have broader implications, including threats to national security, as it involves information shared with military and federal entities.

The dataset—referred to as “BlueLeaks 2.0” by nonprofit transparency group DDoSecrets—spans decades of records, from February 1987 through November 2025. It was allegedly stolen late last year by a hacking group calling itself INTERNET YIFF MACHINE and later shared with media outlet Straight Arrow News and DDoSecrets.

In a statement, a member of the hacker group confirmed responsibility for putting the data up for sale.

“It’s truly not something I want to do and it goes against my principles,” the hacker said. “However, it was out of necessity. Principles are for the well-fed, and I’m unfortunately not in a great place.”

When asked about potential buyers, the hacker indicated that interest had already been shown.

“I assume this will likely attract customers related to fraud, extortion, or at worst, finding and targeting informants,” they said. “Again, this isn’t something I feel good about doing, but it’s necessary.”

The individual also noted that they intend to sell the dataset to only one buyer.

Experts warn the consequences could be severe. Mailyn Fidler, assistant professor at the University of New Hampshire Franklin School of Law, previously stated that if such data becomes widely accessible, it could result in “severe harm and even death to police informants.”

P3 Global Intel’s parent company, Navigate360, has not commented on the reported sale. Earlier, CEO JP Guilbault stated that a third-party forensic investigation had been launched to determine the scope of the incident.

“To this point, we have not confirmed that any sensitive information has been accessed or misused,” Guilbault said at the time.

Since then, no further updates have been released, and the company’s services continue to operate. However, some agencies have taken precautionary steps. The Portland Police Bureau in Oregon recently urged residents to temporarily refrain from submitting tips to its Crime Stoppers program while the situation is being assessed.

Bengaluru Businessman Duped of Rs 15.45 Crore in Fake CBI 'Digital Arrest' Scam

 

A Bengaluru businessman, Ajit Gopalakrishna Saraf from Belagavi, fell victim to a sophisticated cyber fraud orchestrated by imposters posing as Central Bureau of Investigation (CBI) officials, resulting in a staggering loss of Rs 15.45 crore. The scam unfolded through a single phone call that escalated into a prolonged "digital arrest," exploiting the victim's fear of legal repercussions. Reported on April 11, 2026, by NDTV, this incident highlights the growing menace of impersonation frauds targeting professionals in India's tech hub. 

The ordeal began when Saraf received a call from a fraudster masquerading as CBI Director K. Subramanyam. The caller alleged that two SIM cards registered in Saraf's name were linked to Jet Airways founder Naresh Goyal, who had been arrested. Further, the scammer claimed investigations revealed Saraf had laundered Rs 25 lakh from his Canara Bank account in association with Goyal, earning a commission, and threatened immediate arrest unless he cooperated.

Under intense psychological pressure, Saraf endured a "digital arrest," where fraudsters kept him confined virtually, coercing compliance through threats of imprisonment. Panicked, he transferred Rs 15.45 crore via multiple Real Time Gross Settlement System (RTGS) transactions from February 7 to March 9, 2026, draining his life savings. Police noted the victim's compliance stemmed from sustained manipulation, a hallmark of such scams. 

Realizing the deception, Saraf approached Bengaluru's Cyber Crime Police Station to file a complaint, triggering an investigation. Authorities identified at least 10 primary beneficiary bank accounts spread across Hyderabad, Delhi, Punjab, Haryana, Gujarat, and West Bengal, pointing to an organized inter-state cybercrime syndicate. Efforts are ongoing to trace the perpetrators, freeze accounts, and recover funds.

This case underscores the rising threat of "digital arrest" scams in Bengaluru, where fraudsters impersonate agencies like the CBI to extract huge sums. Victims often face weeks of surveillance via calls or video, as seen in similar incidents, including a techie's Rs 32 crore loss. Authorities urge verifying official communications directly and reporting suspicions immediately to curb these networks.

Physical AI Talent War Drives Salary Surge Across Robotics And Autonomous Vehicle Industry

 

Salaries are climbing fast as demand surges for experts who blend AI know-how with hands-on hardware skills. Firms in robotics, military technology, and autonomous machinery now pay between $300,000 and $500,000 just to attract top people. The surge echoes earlier fights for workers during the driverless-car push, when even big names such as Waymo had trouble pulling in talent. The pressure stems not from trends but from how few engineers can actually bridge software intelligence with real-world devices.

Competition is not slowing; it is spreading. What drives this hiring wave is the need for people who can connect classic robotics with current AI tools, building and rolling out intelligent systems across many areas: humanoid machines, factory automation, self-driving forklifts, and equipment used in farming, mining, and construction. Because these jobs involve high-level, cross-disciplinary challenges, skilled workers have become highly sought after, and the rivalry now stretches beyond new tech firms to long-standing carmakers.

Defense technology companies, backed by steady financial support from organizations including the U.S. Department of Defense, are now recruiting skilled professionals more aggressively than many peers. With better pay on offer, workers once aimed at self-driving-car ventures are shifting direction, nudging established automakers and new entrants alike toward rethinking how they hire and reward staff. Positions such as AI enablement engineers and applied AI researchers see intense demand, feeding straight into the development of advanced intelligent systems.

This shift in talent demand could reshape parts of the auto industry. Companies focused on driverless systems risk losing key staff, potentially stalling progress, while newer entrants may need to raise more capital or deploy it more carefully just to keep up. Some investors are moving fast: one backer has gathered well over a billion dollars to support emerging hardware-driven AI ventures. Growth in this space appears closely tied to who can attract and retain technical experts.

What lies ahead is about more than filling roles: industries are shifting as firms move past self-driving cars toward what some call physical AI, stretching into military technology, factory robotics, and new kinds of transport machinery. Companies like Hermeus, having secured major capital recently, show where the money is going: complex builds that tie artificial intelligence to real-world hardware. Growth now hinges less on software alone and more on machines that act in physical space.

As the field matures, the fight for skilled workers will play a central role in where it heads next. Winning trust and keeping sharp minds depends on which organizations can operate real AI systems at scale today. With demand climbing while available experts remain few, the shortage of hardware-linked AI skills is likely to persist, pointing toward lasting changes in how firms assess and pursue technical talent.

Uffizi Cyber Incident Serves as a Warning for Europe’s Cultural Sector

The cyber intrusion at the Uffizi Galleries in early 2026 has quickly evolved from an isolated security lapse into a case study of systemic digital exposure within Europe’s cultural infrastructure. One of the continent’s most prestigious custodians of artistic heritage, the institution disclosed that attackers succeeded in extracting its photographic archive, an asset of both scholarly and operational value, before containment measures were enacted.

Although restoration from secured backups ensured continuity of operations, the incident has sharpened attention on how legacy systems, often peripheral to core modernization efforts, can quietly become high-risk vectors within otherwise well-defended environments. Subsequent forensic assessments indicate that the breach was neither abrupt nor opportunistic.

Investigative timelines trace initial compromise activity as far back as August 2025, suggesting a calculated persistence campaign rather than a single-point intrusion. The suspected entry vector was an overlooked software component responsible for handling low-resolution image flows on the museum’s public-facing infrastructure, an element deemed non-critical and therefore excluded from rigorous patch cycles. This miscalculation enabled attackers to establish a stable foothold, from which they executed disciplined lateral movement across interconnected systems spanning the Uffizi complex, including Palazzo Pitti and the Boboli Gardens.

Operating under a low-and-slow exfiltration model, the actors deliberately avoided triggering conventional detection thresholds, transferring data incrementally over several months. By the time administrative servers exhibited disruption, the extraction phase had largely concluded, underscoring a level of operational maturity that challenges traditional assumptions about breach visibility and response timelines.

Beyond its digital architecture, the Uffizi Galleries safeguards some of Italy’s most iconic works, including The Birth of Venus and Primavera by Sandro Botticelli, alongside Doni Tondo by Michelangelo, a cultural weight that amplifies the implications of any security compromise.

Institutional statements have sought to contextualize the operational impact, indicating that service disruption was limited to the restoration window required for backup recovery, with public disclosure issued post-incident in line with internal verification protocols. 

Reports circulating in Italian media suggested that threat actors had extended their reach across interconnected sites, including Palazzo Pitti and the Boboli Gardens, briefly asserting control over the photographic server and issuing a ransom demand directly to director Simone Verde. 

However, the institution maintains that comprehensive backups remained intact and that parallel developments, such as restricted access to sections of Palazzo Pitti and the temporary relocation of select valuables to the Bank of Italy, were pre-scheduled measures linked to ongoing renovation cycles rather than reactive security responses.

Similarly, the transition from analogue to digital surveillance infrastructure, initially recommended by law enforcement in 2024, was accelerated within a broader risk recalibration framework influenced in part by high-profile incidents such as the Louvre Museum theft case. 

The convergence of these events, including the recent theft of works by Pierre-Auguste Renoir, Paul Cézanne and Henri Matisse from a northern Italian museum, reinforces a broader pattern in which physical and cyber threats are increasingly intersecting, demanding integrated security postures across Europe’s cultural institutions.

The reference to the Louvre Museum is neither incidental nor rhetorical. On 19 October 2025, a highly coordinated physical breach exposed critical lapses in on-site security when individuals, posing as construction workers, accessed restricted areas via a freight lift, breached a second-floor entry point, and removed multiple pieces of the French Crown Jewels within minutes.

Subsequent findings from a Senate-level inquiry pointed to systemic deficiencies, including limited CCTV coverage across exhibition spaces, misaligned external surveillance equipment, and fundamentally weak access controls at the credential level. The incident, which ultimately led to the resignation of director Laurence des Cars in February 2026, remains unresolved, with the stolen artefacts yet to be recovered. 

Against this backdrop, the distinction drawn by the Uffizi Galleries becomes materially significant. Unlike the Louvre breach, the Uffizi incident remained confined to the digital domain, with no evidence of physical intrusion or compromise of exhibition assets. 

Public-facing operations, including ticketing systems and visitor access, continued uninterrupted, with the only measurable impact attributed to backend restoration processes following data recovery. Amid intensifying scrutiny, conflicting narratives have emerged regarding the scope of data exposure. 

Reporting referenced by Cybernews, citing local sources including Corriere della Sera, alleged that attackers exfiltrated operationally sensitive artefacts, ranging from authentication credentials and alarm configurations to internal layouts and surveillance telemetry, before issuing a ransom demand.

The Uffizi Galleries has firmly contested these assertions, maintaining that forensic validation has yielded no evidence supporting the compromise of architectural maps or restricted security schematics, and emphasizing that certain observational elements, such as camera placement, remain inherently visible within public-facing environments. 

From a technical standpoint, the institution reiterated that core security systems are logically segregated and not externally addressable, limiting the feasibility of direct remote extraction as described. While investigations indicate that threat actors may have leveraged interconnected endpoints, including workstation nodes and peripheral devices, to incrementally profile the environment, officials stress that no physical assets were impacted and no confirmed data misuse has been established.

The ransom communication, reportedly directed to director Simone Verde with threats of dark web exposure, further underscores the psychological dimension often accompanying such campaigns. Notably, precautionary measures observed in parallel, such as temporary gallery closures and the transfer of select holdings to the Bank of Italy, have been attributed to pre-existing operational planning rather than reactive containment.

In the broader context of heightened sectoral vigilance following incidents like the breach-linked vulnerabilities exposed at the Louvre Museum, the Uffizi has accelerated its transition from analogue to digital surveillance infrastructure, aligning with law enforcement recommendations issued in 2024. 

In its final clarification, the Uffizi Galleries moved to separate speculation from confirmed facts. While it did not deny that some valuables had been temporarily moved to a secure vault at the Bank of Italy, officials stressed that this step was part of planned renovation work, not a response to the cyber incident.

Reports from Corriere della Sera about sealed doors and restricted staff communication were also addressed, with the museum explaining that certain closures were linked to long-pending fire safety compliance and structural adjustments required for a historic building of its age. 

On the technical front, the Uffizi confirmed that its photographic archive remained safe, clarifying that although the server had been taken offline, this was done to restore data from backups, a process now completed without any loss.

Despite the attention surrounding the breach, the museum continues to function normally, with visitor areas and ticketing operations unaffected, underlining how effective backup systems and planning helped limit real-world impact.

Phishing Cases Drop in Hong Kong, But Losses Surge as Scammers Turn to Account Takeovers

 

Phishing incidents in Hong Kong declined sharply last year, yet the financial damage caused by such scams rose significantly, according to police. While fewer cases were reported, the total amount lost by victims climbed to HK$110 million (US$14 million), highlighting a shift in cybercrime tactics.

Authorities recorded 1,093 phishing cases in 2025, a 60 per cent drop from 2,731 incidents the previous year. Despite this decline, overall losses jumped by 112.9 per cent, with the average loss per case increasing more than four times to around HK$100,000. Police attributed this rise to increasingly sophisticated methods used by scammers, who are now focusing on gaining control of victims’ accounts instead of merely collecting credit card details.
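The reported figures can be cross-checked with simple arithmetic. The calculation below is derived from the numbers quoted above, not from official breakdowns: it confirms the roughly 60 per cent case decline and the average loss of around HK$100,000 per case, and back-solves the implied prior-year totals from the stated 112.9 per cent rise.

```python
# Figures as reported by Hong Kong police.
cases_2025, cases_2024 = 1_093, 2_731
losses_2025_hkd = 110_000_000

# Decline in case count: ~0.60, matching the "60 per cent drop".
case_drop = (cases_2024 - cases_2025) / cases_2024

# Average loss per case in 2025: ~HK$100,640, i.e. "around HK$100,000".
avg_2025 = losses_2025_hkd / cases_2025

# A 112.9% rise implies prior-year losses of roughly HK$51.7 million,
# and a prior-year average of roughly HK$18,900 per case (derived values).
losses_2024_hkd = losses_2025_hkd / (1 + 1.129)
avg_2024 = losses_2024_hkd / cases_2024

print(round(case_drop, 2), round(avg_2025), round(avg_2024))
```

The implied year-on-year ratio of average losses (about 5.3x) is consistent with the police characterization of a per-case increase of "more than four times".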

“Previously, phishing links were sent aiming to obtain credit card information,” said acting senior superintendent Rachel Hui Yee-wai of the cyber security and technology crime bureau, adding that scammers would then simply use the information to make unauthorised purchases.
“But in recent years, these links aim to take over accounts – they could be people’s securities accounts, online banking accounts or even WhatsApp accounts to go on and scam friends and family.”

In one example shared by authorities, a fraudster impersonated a WhatsApp administrator and asked a victim to provide a login verification code. The victim complied, unknowingly giving the scammer full access to the account.

“This effectively allowed scammers to take control … the victim basically handed the account over and let others view all the activity and content,” she said.

The attacker then leveraged the compromised account to conduct further scams, ultimately causing the victim to lose HK$19 million. Police noted that such incidents demonstrate how phishing schemes have evolved into more complex operations involving identity theft and social engineering.

Separately, a large-scale phishing simulation conducted by police revealed that employees across Hong Kong remain vulnerable to these attacks, especially when messages appear to originate internally. The exercise, carried out between October and January, involved 301 organisations and more than 53,000 participants who were unknowingly sent simulated phishing emails and SMS messages.

Results showed that 13.4 per cent of participants clicked on malicious email links, up from 11.5 per cent a year earlier. Among those who clicked, nearly half submitted personal information, while 6.4 per cent uploaded data or downloaded files. At least one employee in 89 per cent of participating organisations fell for a phishing email.

Senior staff were found to be more susceptible, with a click rate of 15.5 per cent compared with 13 per cent among general employees. Messages disguised as internal communications proved particularly effective. Emails posing as IT department notifications offering gifts had the highest click rate at 6.7 per cent, followed by file download alerts.

A separate SMS phishing test involving 3,620 participants showed a lower click rate of 5.9 per cent, though 70 per cent of organisations still had at least one employee engage with a malicious link. In real-world scenarios, SMS remains a dominant channel for scammers, accounting for over 90 per cent of phishing attempts, often masquerading as government agencies, banks, or courier services.

Police also highlighted the increasing use of artificial intelligence in crafting phishing attacks, enabling criminals to produce highly realistic messages and fake websites.

“They can use AI or other tools to make the website almost identical to the genuine one … even the logo is the same,” Hui said.

Officials warned that such advancements make it harder for individuals to identify fraudulent communications, particularly when combined with psychological tactics like urgent security alerts designed to lower suspicion.

Authorities said they will continue enhancing prevention and enforcement measures, including using AI to detect suspicious websites and collaborating with telecom providers to block scam messages. The public is advised to stay cautious, avoid clicking on unknown links, and verify requests for sensitive information through official sources.