
Salesforce’s New “Headless 360” Lets AI Agents Run Its Platform

 


Salesforce has introduced what it describes as the most crucial architectural overhaul in its 27-year history, launching a new initiative called “Headless 360.” The update is designed to allow artificial intelligence agents to control and operate the company’s entire platform without requiring a traditional graphical interface such as a dashboard or browser.

The announcement was made during the company’s annual TDX developer conference in San Francisco, where Salesforce revealed that it is releasing more than 100 new developer tools and capabilities. These tools immediately enable AI systems to interact directly with Salesforce environments. The move reflects a deeper shift in enterprise software, where the rise of intelligent agents capable of reasoning and executing tasks is forcing companies to rethink whether conventional user interfaces are still necessary.

Salesforce’s answer to that question is direct: instead of designing software primarily for human interaction, the platform is now being rebuilt so that machines can access and operate it programmatically. According to the company, this transformation began over two years ago with a strategic decision to expose all internal capabilities rather than keeping them hidden behind user interfaces.

This shift is taking place during a period of uncertainty in the broader software industry. Concerns that advanced AI models developed by companies like OpenAI and Anthropic could disrupt traditional software business models have already impacted market performance. Industry indicators, including software-focused exchange-traded funds, have declined substantially, reflecting investor anxiety about the long-term relevance of existing SaaS platforms.

Senior leadership at Salesforce has indicated that the new architecture is based on practical challenges observed while deploying AI systems across enterprise clients. According to internal insights, building an AI agent is only the initial step. Organizations also face ongoing challenges related to development workflows, system reliability, updates, and long-term maintenance.

To address these challenges, Headless 360 is structured around three foundational pillars.

The first pillar focuses on development flexibility. Salesforce has introduced more than 60 tools based on Model Context Protocol, along with over 30 pre-configured coding capabilities. These allow external AI coding agents, including systems such as Claude Code, Cursor, Codex, and Windsurf, to gain direct, real-time access to a company’s Salesforce environment. This includes data, workflows, and underlying business logic. Developers are no longer required to use Salesforce’s own integrated development environment and can instead operate from any terminal or external setup.

In addition, Salesforce has upgraded its native development environment, Agentforce Vibes 2.0, by introducing an “open agent harness.” This system supports multiple agent frameworks, including those from OpenAI and Anthropic, and dynamically adjusts capabilities depending on which AI model is being used. The platform also supports multiple models simultaneously, including advanced systems like Claude Sonnet and GPT-5, while maintaining full awareness of the organization’s data from the start.

A notable technical enhancement is the introduction of native React support. During demonstrations, developers created a fully functional application using React instead of Salesforce’s traditional Lightning framework. The application connected to Salesforce data through GraphQL while still inheriting built-in security controls. This significantly expands front-end flexibility for developers.
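
For a sense of the data-access pattern involved, here is a minimal sketch of querying Salesforce's GraphQL endpoint (the demo used React; Python is used here purely for brevity). The instance URL, token, API version, and exact query shape are assumptions that vary by org:

```python
import requests

INSTANCE = "https://your-instance.my.salesforce.com"  # placeholder
TOKEN = "..."                                         # OAuth access token (placeholder)

# Query shape follows Salesforce's public GraphQL API (uiapi namespace);
# details may differ by API version.
QUERY = """
query accounts {
  uiapi {
    query {
      Account(first: 5) {
        edges { node { Id Name { value } } }
      }
    }
  }
}
"""

resp = requests.post(
    f"{INSTANCE}/services/data/v59.0/graphql",
    json={"query": QUERY},
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```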

The second pillar focuses on deployment. Salesforce has introduced an “experience layer” that separates how an AI agent functions from how it is presented to users. This allows developers to design an experience once and deploy it across multiple platforms, including Slack, mobile applications, Microsoft Teams, ChatGPT, Claude, Gemini, and other compatible environments. Importantly, this can be done without rewriting code for each platform. The approach represents a change from requiring users to enter Salesforce interfaces to delivering Salesforce-powered experiences directly within existing workflows.

The third pillar addresses trust, control, and scalability. Salesforce has introduced a comprehensive set of tools that manage the entire lifecycle of AI agents. These include systems for testing, evaluation, monitoring, and experimentation. A central component is “Agent Script,” a new programming language designed to combine structured, rule-based logic with the flexible reasoning capabilities of AI models. It allows organizations to define which parts of a process must follow strict rules and which parts can rely on AI-driven decision-making.

Additional tools include a Testing Center that identifies logical errors and policy violations before deployment, custom evaluation systems that define performance standards, and an A/B testing interface that allows multiple agent versions to run simultaneously under real-world conditions.

One of the key technical challenges addressed by Salesforce is the difference between probabilistic and deterministic systems. AI agents do not always produce identical results, which can create instability in enterprise environments where consistency is critical. Early adopters reported that once agents were deployed, even small modifications could lead to unpredictable outcomes, forcing teams to repeat extensive testing processes.

Agent Script was developed to solve this problem by introducing a structured framework. It defines agent behavior as a state machine, where certain steps are fixed and controlled while others allow flexible reasoning. This approach ensures both reliability and adaptability.
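
Salesforce has not published Agent Script syntax here, but the pattern it describes, fixed transitions plus model-driven steps, can be sketched in plain Python. Everything below (run_llm, REFUND_LIMIT, the refund flow) is a hypothetical illustration of the idea, not Agent Script itself:

```python
# Hypothetical sketch only: models the described deterministic/flexible
# split as a simple state machine in Python.
from dataclasses import dataclass

REFUND_LIMIT = 100.0  # deterministic policy threshold (assumed value)

def run_llm(prompt: str) -> str:
    """Placeholder for a call to a reasoning model via any LLM SDK."""
    raise NotImplementedError

@dataclass
class Ticket:
    amount: float
    description: str

def handle_refund(ticket: Ticket) -> str:
    # Fixed, rule-based step: amounts above the limit always escalate.
    if ticket.amount > REFUND_LIMIT:
        return "escalate_to_human"
    # Flexible step: the model reasons over the free-form description.
    verdict = run_llm(f"Is this refund request legitimate? {ticket.description}")
    # Fixed step again: constrain the outcome to allowed transitions.
    return "approve_refund" if "yes" in verdict.lower() else "deny_refund"
```

The design point is that the hard rule can never be overridden by the model, while the ambiguous judgment is delegated to it.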

Salesforce also distinguishes between two types of AI system architectures. Customer-facing agents, such as those used in sales or support, require strict control to ensure they follow predefined rules and maintain brand consistency. These operate within structured workflows. In contrast, employee-facing agents are designed to operate more freely, exploring multiple paths and refining their outputs dynamically before presenting results. Both systems operate on a unified underlying architecture, allowing organizations to manage them without maintaining separate platforms.

The company is also expanding its ecosystem. It now supports integration with a wide range of AI models, including those from Google and other providers. A new marketplace brings together thousands of applications and tools, supported by a $50 million initiative aimed at encouraging further development.

At the same time, Salesforce is taking a flexible approach to emerging technical standards such as Model Context Protocol. Rather than relying on a single method, the company is offering APIs, command-line interfaces, and protocol-based integrations simultaneously to remain adaptable as the industry evolves.

A real-world example shared during the announcement demonstrated how one company built an AI-powered customer service agent in just 12 days. The system now handles approximately half of customer interactions, improving efficiency while reducing operational costs.

Finally, Salesforce is also changing its business model. The company is shifting away from traditional per-user pricing toward a consumption-based approach, reflecting a future where AI agents, rather than human users, perform the majority of work within enterprise systems.

This transformation suggests a new layer in strategic operations. Instead of resisting the rise of AI, Salesforce is restructuring its platform to align with it, betting that its existing data infrastructure, enterprise integrations, and accumulated operational logic will continue to provide value even as software becomes increasingly autonomous.

Nvidia’s AI Launch Sparks Quantum Stock Surge, Minting Xanadu’s CEO a Billionaire

 

Quantum computing stocks jumped after Nvidia unveiled its Ising open-source AI model family, a move that investors interpreted as a strong validation of the sector. The result was a sharp rally in several names, with Xanadu standing out as the biggest winner and its founder Christian Weedbrook briefly joining the billionaire ranks.

Notably, Nvidia's announcement did not introduce a new quantum computer; instead, it introduced software tools aimed at two of quantum computing's hardest problems: calibration and error correction. Nvidia said Ising can make decoding up to 2.5 times faster and three times more accurate than pyMatching, which helped convince traders that the path to practical quantum systems may be improving faster than expected.
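
For context, pyMatching is an open-source minimum-weight perfect matching decoder that serves as a common baseline for quantum error correction. A minimal decoding example with its published Python API, using a small repetition code for illustration:

```python
import numpy as np
import pymatching  # pip install pymatching

# Parity-check matrix of a 5-bit repetition code: each row checks
# that two neighbouring bits agree.
H = np.array([
    [1, 1, 0, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0],
    [0, 0, 0, 1, 1],
])

matching = pymatching.Matching(H)

noise = np.array([0, 0, 1, 0, 0])       # an error on bit 2
syndrome = (H @ noise) % 2              # which checks fire
correction = matching.decode(syndrome)  # predicted error pattern
print(correction)                       # -> [0 0 1 0 0]
```

Nvidia's claimed speed and accuracy gains are measured against decoders in exactly this role, which is why the release was read as progress on error correction rather than on hardware.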

That enthusiasm quickly turned into extreme stock moves. Xanadu’s shares climbed from under $8 to roughly $40 in six trading sessions, while the Toronto exchange paused trading several times because of the speed of the move. Similar gains appeared across the sector, including D-Wave, IonQ, Rigetti, Infleqtion, and Quantum Computing, showing that the market was bidding up the whole group rather than just one company. 

For Xanadu, the rally created an extraordinary paper windfall. Weedbrook owns 15.6% of the company through multiple voting shares, and his stake was valued at about $1.5 billion to $1.6 billion during the surge. The story is notable because the company’s valuation moved dramatically on sentiment tied to Nvidia’s broader endorsement of quantum-related tooling, not on a fresh commercial breakthrough from Xanadu itself. 

The caveat is that quantum computing remains a high-expectation, low-certainty industry. Nvidia's move suggests that investors increasingly view AI and quantum as complementary technologies, especially if software can help make fragile quantum hardware more usable. But the volatility also highlights the risk: when a sector is still early and speculative, a single announcement can create massive gains, even before the business fundamentals fully catch up.

Tinder And Zoom Introduce World ID Iris Scanning To Verify Humans Amid Rising AI Fake Profiles

 

Eye-scan technology is now rolling out on Tinder and Zoom to confirm that real people sit behind profiles, amid rising fears about AI mimics and bots. The move leans on identity checks from World ID - backed by Tools for Humanity - to tell actual humans apart from automated accounts. Verification works through unique iris patterns, operating quietly when someone logs in. Not every user sees it yet; testing will shape how widely it spreads. Behind the scenes, privacy safeguards aim to shield biometric data tightly. Shifts like these respond to digital trust gaps widening across social apps. Scanning begins at the iris, that ring of color in the eye, using either an app or a round gadget made for this purpose. After confirmation comes through, a distinct digital ID lands on the person's smartphone.

This key travels with them, opening access wherever systems accept it to prove someone is human, not automated software. Rising floods of fake online personas built by artificial intelligence fuel efforts like this one. Impersonations crafted by deepfakes grow more common, pushing such verification into sharper focus. Backed by Sam Altman - also at the helm of OpenAI - the project made its debut in San Francisco. At the event, he suggested the web may soon be flooded with machine-made content more than human output. Truth online might hinge on tools able to tell actual humans apart from artificial ones. 

Such systems, according to him, are likely to become unavoidable. Fake accounts plague both Tinder and Zoom, complicating trust on these platforms. Driven by artificial intelligence, counterfeit profiles on Tinder deploy synthetic photos alongside prewritten messages. These setups often unfold into romantic deception aimed at seizing cash or sensitive details. Reports indicate massive monetary damage worldwide from similar frauds, with losses tallying in the billions across nations within just a few years.

Zoom faces a distinct yet connected challenge - deepfake-driven impersonation at work. A well-documented incident saw fraudsters deploy synthetic audio and video to mimic corporate leaders, tricking staff into sending large sums. Here, World ID steps in, adding stronger verification when stakes run high. The iris scans arrive after Match Group had already introduced video selfies to fight fake profiles on Tinder. Though not required, this newer check offers a tougher way to prove who you really are. People at the company say it helps users feel more certain about others’ real identities.

What matters most is trust during interactions. Because irises differ so much between people, World ID uses them as a key part of its method. This setup aims to protect user privacy by creating an individual code instead of keeping sensitive data like home locations or full names. Even though it does not collect traditional identity markers, the technology still confirms real individuals. Growth has been steady, with expanding adoption seen on various digital services. 

A large number of people - already in the millions - have gone through the sign-up process. Now shaping how we confirm who's behind a screen, artificial intelligence pushes biometrics deeper into everyday applications. Though concerns linger about data safety and user acceptance, this trend mirrors wider attempts across tech sectors to tackle rising confusion between real people and sophisticated automated fakes. Despite hesitation in some areas, systems that verify physical traits gain ground as tools for clearer online identities.

Fake CAPTCHA Lures Power IRSF Fraud and Crypto Theft Campaigns


 

Research by Infoblox reveals a new fraud operation that combines routine web security practices with telecom billing abuse, resulting in unauthorized mobile activity by using counterfeit CAPTCHA interfaces. 

In this scheme, familiar human verification prompts are repurposed as covert triggers for International Revenue Share Fraud, effectively converting a typical browser interaction into an event that is monetized through telecom billing. 

Several studies have demonstrated that users who navigate what appears to be a legitimate verification process may unknowingly authorize premium or international SMS transmissions, creating a direct revenue stream for threat actors. 

IRSF has presented challenges to telecom operators for decades, but this implementation introduces a previously undetected delivery vector that exploits user trust in widely used web validation mechanisms.

While individual charges may appear insignificant, the cumulative impacts at scale present carriers with measurable financial exposure, along with an increase in customer disputes resulting from opaque and unrecognized billing activity. 

Based on the analysis, the campaign appears to have been operating since mid-2020, reflecting a sustained and carefully developed exploitation approach. Through classic social engineering techniques and browser manipulation tactics, including back-button hijacking, the infrastructure effectively limits user navigation and reinforces the illusion of a legitimate verification process.

In addition, dozens of originating numbers were identified across multiple international jurisdictions, emphasizing the geographical dispersion of the monetization layer underpinning the scheme. The staged CAPTCHA sequence is designed to silently trigger multiple outbound SMS events, routing messages to a variety of premium-rate destinations rather than a single endpoint, thus maximizing revenue per interaction.

A delay in the appearance of associated charges - often weeks after the event - further obscures attribution, reducing the likelihood that users recall or dispute the charges at billing time. Particularly significant is the integration of malicious traffic distribution systems within this operation, along with the repurposing of infrastructure typically used for malware delivery and phishing redirection into high-volume SMS fraud orchestration.

Threat actors can scale a campaign efficiently while maintaining operational stealth by layering redirection and evasion mechanisms through this convergence. The findings point to a highly orchestrated, multi-phase fraud scheme that combines behavioral manipulation with telecom monetization.

By utilizing a pool of internationally distributed numbers - many registered in regions with higher SMS termination costs, including Azerbaijan, Egypt, and Myanmar - the operation maximizes per-transaction yields.

It is common practice for victims to be funneled through a series of convincing CAPTCHA challenges that are intended to trigger outbound messaging events to numerous premium-rate destinations discreetly, often resulting in several SMS transmissions within the same session. This layered interaction model, strengthened by browser-level interference, such as history manipulation, prevents users from leaving the website while maintaining the illusion that the application is legitimate. 

In this fraud model, the threat actor leverages inter-carrier settlement mechanisms to route traffic toward high-fee endpoints under revenue-sharing arrangements. Moreover, the integration of traffic distribution systems provides an additional level of operational precision, allowing targeted victimization while dynamically concealing malicious infrastructure from detection systems.

Based on industry assessments, artificially inflated traffic associated with such schemes remains among the most financially damaging types of messaging abuse, with a significant share of telecom operators reporting both elevated traffic volumes and revenue leakage tied to these campaigns.

Within this context, individual users' seemingly trivial costs aggregate into a scalable and persistent revenue stream, demonstrating the ongoing viability of IRSF as a global fraud vector. Detailed investigations by Infoblox and Confiant further illustrate how abuse of Keitaro Tracker has enabled large-scale fraud ecosystems.

Keitaro was originally designed as a self-hosted ad performance tracking tool, but its conditional routing capabilities have been systematically repurposed by threat actors - often operating with illegally obtained or cracked licenses - as a covert traffic distribution system and cloaking tool. Through this misuse, victims are diverted from seemingly legitimate entry points, such as sponsored social media advertisements, to fraudulent investment platforms claiming AI-driven trading and guaranteed high returns.

As a method of enhancing credibility and engagement, campaigns frequently employ fabricated media narratives, including spoofed news coverage, synthetic endorsements, and deepfake video content attributed to actors such as FaiKast. In a four-month observation period, telemetry indicates more than 120 discrete campaigns were deployed in conjunction with Keitaro-linked infrastructure, resulting in significant DNS activity across thousands of domains. 

The majority of this traffic has been attributed to cryptocurrency-related fraud, particularly wallet draining schemes disguised as promotional airdrops involving widely recognized blockchain services and assets. 

The convergence of legacy investment scam tactics with adaptive traffic orchestration and artificial intelligence-based deception techniques demonstrates how scalable infrastructure is intertwined with persuasive social engineering to ensure maximum reach and financial extraction in an evolving threat landscape.

In terms of execution, the scheme contains carefully optimized conversion funnels that maximize both engagement and monetization. The typical interaction sequence, consisting of multiple CAPTCHA stages, can trigger as many as 60 outbound SMS messages to a distributed network of international phone numbers, generating charges of around $30 per session.

Although modest when considered individually, this cost model scales well across large victim pools when replicated, especially in countries with high and mid-level termination rates across Europe and Eurasia. Campaign logic is further refined through client-side state management: cookies track progression metrics such as “successRate” and dynamically determine user pathways.
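
The economics are easy to check. A back-of-the-envelope calculation using the figures above (60 messages and roughly $30 per session); the daily victim count is an assumed scale, purely for illustration:

```python
# Figures from the report: up to 60 SMS per session, ~$30 per session.
msgs_per_session = 60
cost_per_session = 30.0            # USD
victims_per_day = 10_000           # assumed, purely illustrative

per_message = cost_per_session / msgs_per_session
daily_take = cost_per_session * victims_per_day
print(f"~${per_message:.2f} per message")             # ~$0.50
print(f"~${daily_take:,.0f} per day at that scale")   # ~$300,000
```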

By selectively advancing, redirecting, or filtering participants into parallel fraud streams, adaptive routing improves targeting precision while fragmenting detection efforts, since traffic is distributed among multiple controlled endpoints.

Additionally, browser manipulation techniques, specifically JavaScript-driven history tampering, continue to be used, ensuring persistence by redirecting users back into the fraudulent flow when they attempt to exit through standard navigation controls.

As a result, users face a constrained browsing environment that prolongs interaction time and increases the likelihood of repeated chargeable events before disengaging. Overall, the operation illustrates a shift in fraud engineering, as telecom exploitation, adaptive web scripting, and traffic orchestration converge into a unified, revenue-generating system.

By embedding monetization triggers within seemingly benign user interactions, and by reinforcing those triggers with persistence mechanisms such as cookie-driven logic and navigation controls, threat actors are successfully industrializing high-volume, low-value fraud. According to Infoblox, these campaigns are not only technically sophisticated but also exploit systemic gaps in web platforms, advertising networks, and telecom billing frameworks.

As these tactics grow more sophisticated, they demand coordinated mitigation in addition to detection: tighter controls across digital advertising supply chains, improved browser-level safeguards, and greater transparency around cross-border messaging charges will be required to limit the scalability of such abuse.

Can AI Own Its Work? A Debate That Started With a Monkey Photo

 



A single photograph captured in a remote forest over a decade ago has become central to one of the most complex legal questions of the digital age: what happens when creative work is produced without direct human authorship? The answer now carries long-term consequences for artificial intelligence, creative industries, and ownership rights in the modern world.

The image in question originated in 2011, when wildlife photographer David Slater was documenting crested black macaques in Indonesia. These monkeys are not only endangered but also known for their highly expressive faces, making them attractive subjects for photography. However, Slater faced difficulty capturing close-up shots because the animals were wary of human presence.

To work around this, he positioned his camera on a tripod, enabled automatic focus, and used a flash, allowing the monkeys to approach and interact with the equipment without feeling threatened. His approach relied on curiosity rather than control. Eventually, one macaque handled the camera and pressed the shutter button while looking directly into the lens. The resulting image, widely known as the “monkey selfie,” appeared almost intentional, with the animal’s expression resembling a posed portrait.

While the photograph initially brought attention and recognition, it soon triggered an unexpected legal dispute. The core issue was deceptively simple: if a photograph is not taken by a human, can anyone claim ownership over it?

The situation escalated when the image was uploaded to Wikipedia, making it freely accessible worldwide. Slater objected to this distribution, arguing that he had lost approximately £10,000 in potential earnings because the image could now be used without payment. However, the Wikimedia Foundation refused to remove the photograph. Its reasoning was based on copyright law, which generally requires a human creator. Since the image was captured by an animal, the organisation classified it as public domain material.

This interpretation was later reinforced by the U.S. Copyright Office, which formally clarified that works produced without human authorship cannot be registered. In its guidance, the office explicitly listed a photograph taken by a monkey as an example of ineligible material, establishing a clear precedent.

The dispute took another unusual turn when People for the Ethical Treatment of Animals filed a lawsuit attempting to assign copyright ownership to the macaque itself. Although framed as a legal claim over the photograph, the case was widely interpreted as an effort to establish broader legal rights for animals. After several years of legal proceedings, a court dismissed the case, concluding that animals do not have the legal capacity to initiate lawsuits.

Legal experts later observed that, although the case focused on animal authorship, it introduced a broader conceptual challenge that would become more relevant with the rise of artificial intelligence. According to intellectual property lawyer Ryan Abbott, the debate could easily extend beyond animals to machines capable of producing creative outputs.

This possibility became reality when computer scientist Stephen Thaler attempted to secure copyright protection for an image generated by his AI system, DABUS. Thaler described the system as capable of independently producing ideas, arguing that it should be recognised as the sole creator of its output. He characterised the system as exhibiting a form of machine-based cognition, though this view is strongly disputed within the scientific community.

Despite these claims, the Copyright Office rejected the application, applying the same reasoning used in the monkey selfie case. Because the work was not created by a human, it could not qualify for copyright protection. This rejection led to a legal challenge that progressed through multiple levels of the U.S. judicial system.

When the case reached the Supreme Court of the United States, the court declined to hear it, leaving lower court rulings intact. The outcome effectively confirmed that, under current U.S. law, works generated entirely by artificial intelligence cannot be owned by anyone, including the developer of the system or the individual who prompted it.

This position has far-reaching implications for the creative economy. Copyright law exists to allow creators and organisations to control and monetise their work. Without ownership rights, it becomes difficult to build sustainable business models around fully AI-generated content. Legal scholar Stacey Dogan noted that this limitation reduces the likelihood of a future where machine-generated content completely replaces human-created media.

At the same time, the rapid expansion of generative AI tools continues to complicate the landscape. These systems function by analysing large datasets and producing outputs based on user instructions, often referred to as prompts. While they can generate text, images, and video at scale, their outputs raise questions about originality and authorship, particularly when human involvement is minimal.

Recent industry developments illustrate this uncertainty. Experimental AI-generated content has attracted large audiences online, suggesting a level of public interest, even if motivations such as novelty or criticism play a role. However, some technology companies have begun reassessing their AI content strategies, particularly where ownership and profitability remain unclear.

Expert opinion on the value of fully AI-generated content remains divided. Some specialists argue that such content lacks depth or authenticity, while others view AI as a useful tool for supporting human creativity rather than replacing it. This perspective positions AI as a collaborator rather than an independent creator.

Legal approaches also vary internationally. In the United Kingdom, copyright law allows ownership of computer-generated works by assigning authorship to the individual responsible for arranging their creation. However, this framework is currently being reconsidered as policymakers evaluate whether it remains appropriate in the context of modern AI systems.

One of the most complex unresolved issues involves hybrid creation. When humans actively guide, refine, and edit AI-generated outputs, determining ownership becomes less straightforward. A notable example involves an AI-assisted artwork that won a competition after extensive prompting and editing, raising questions about how much human contribution is required for copyright protection.

This debate is not entirely new. When photography first emerged, similar concerns were raised about whether cameras, rather than humans, were responsible for creative output. Over time, legal systems adapted by recognising the role of human intention and decision-making. Artificial intelligence now presents a more advanced version of that same challenge.

For now, the legal position in the United States remains clear: without meaningful human involvement, creative works cannot be protected by copyright. However, as AI becomes increasingly integrated into creative processes, the distinction between human and machine contribution is becoming more difficult to define.

What began as an unexpected interaction between a monkey and a camera has therefore evolved into a defining case in the global conversation about creativity, ownership, and technology. The decisions made in courts today will shape how creative work is produced, distributed, and valued in the future.



PhantomCore Exploits TrueConf Flaws to Breach Russian Networks

 

A pro-Ukrainian hacktivist group known as PhantomCore has been exploiting vulnerabilities in TrueConf video conferencing software to infiltrate Russian networks since September 2025. According to a Positive Technologies report, the attackers chained three undisclosed flaws in TrueConf Server, allowing them to bypass authentication, read sensitive files, and execute arbitrary commands remotely. Despite patches being released by TrueConf on August 27, 2025, the group independently reverse-engineered these issues, launching widespread attacks on Russian organizations without relying on public exploits. 

The vulnerabilities include BDU:2025-10114 (CVSS 7.5), an insufficient access control flaw enabling unauthenticated requests to admin endpoints like /admin/*; BDU:2025-10115 (CVSS 7.5), which permits arbitrary file reads; and the critical BDU:2025-10116 (CVSS 9.8), a command injection vulnerability allowing full OS command execution. This exploit chain grants attackers an initial foothold on vulnerable servers, facilitating lateral movement and persistence within victim environments.

PhantomCore's operations highlight their sophistication, as they maintain stealth for extended periods—up to 78 days in some cases—while targeting sectors like government, defense, and manufacturing. PhantomCore's tactics extend beyond TrueConf exploits, incorporating phishing with password-protected RAR archives containing PhantomRAT malware, a shift from earlier ZIP-based methods. Positive Technologies noted over 180 infections from May to July 2025 alone, peaking on June 30, with at least 49 hosts still under attacker control as of early 2026. The group's pro-Ukrainian affiliation aligns with geopolitical motives, focusing exclusively on Russian entities amid ongoing cyber-espionage waves. 

Organizations running TrueConf face heightened risks if unpatched, as attackers evolve tools to evade detection and conduct large-scale breaches. Immediate mitigations include applying the August 2025 patches, monitoring admin endpoints and command logs for anomalies, and segmenting video conferencing servers from core networks. Enhanced defenses against lateral movement, such as network micro-segmentation and behavioral analytics, are crucial to counter PhantomCore's persistence. 
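
One of the recommended mitigations, watching admin endpoints for unauthenticated access, can start as simply as a log sweep. A minimal sketch, assuming a conventional access log; the path, log format, and the session-cookie heuristic are placeholders to adapt to your deployment:

```python
import re
from pathlib import Path

# Hypothetical log location and format; adjust for your proxy or server.
LOG = Path("/var/log/trueconf/access.log")
ADMIN = re.compile(r'"(?:GET|POST) /admin/\S*')

suspicious = []
for line in LOG.read_text(errors="ignore").splitlines():
    # Flag admin-endpoint hits that carry no session cookie, since the
    # access-control flaw allowed unauthenticated /admin/* requests.
    if ADMIN.search(line) and "session=" not in line:
        suspicious.append(line)

for hit in suspicious:
    print(hit)
```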

This campaign underscores the dangers of unpatched collaboration tools in sensitive environments, where private zero-days can fuel nation-aligned hacktivism. Russian firms must prioritize vulnerability management and threat hunting, as PhantomCore's adaptability signals ongoing threats into 2026. By staying vigilant, defenders can disrupt such stealthy intrusions before they escalate to data exfiltration or sabotage.

ShinyHunters Targets McGraw Hill In Salesforce Data Leak Dispute Over Breach Scope

 

A breach at McGraw Hill came to light when details appeared on a leak page run by ShinyHunters, a hacking collective now seeking payment. The listing appeared online without warning and suggested sensitive data had been taken. The firm acknowledged something had gone wrong only after outsiders pointed to the published claims, following up with a brief statement - no elaborate explanations, just confirmation. What exactly was accessed remains partly unclear, though the criminals promise more leaks if demands go unmet. Their method: take data first, then pressure victims publicly through exposure.

Though the collective says it pulled around 45 million records from Salesforce setups, McGraw Hill challenges how serious the incident really was. According to the company, a misconfigured cloud-based Salesforce setup - not a hack of core infrastructure - led to what occurred: the data surfaced via an access error, not forced entry. The attackers, for their part, threaten public release unless money changes hands by their stated date.

The firm later confirmed that only limited data sat exposed, through a public page tied to Salesforce rather than deeper networks. Systems handling daily operations stayed untouched, customer records remained secure, and educational material platforms were unreached. Personal identifiers such as income traces or school files showed no signs of exposure; the breach never reached those layers. Still, a single weak link elsewhere can open doors wider than expected - problems often start outside core networks, hidden in connected tools.

One misstep in setup can ripple across several teams relying on Salesforce, and when outside systems slip, sensitive details sometimes follow: security gaps far from the main system still carry risk close to home. Even with those reassurances, ShinyHunters insists the breached records include personal details - setting their version against the firm’s own review. Contradictions like this often surface when attacks aim to extort, as hackers sometimes inflate what they took to push targets into responding.

Now operating at a steady pace, ShinyHunters stands out within the underground scene by focusing less on locking files and more on quietly siphoning information. Instead of scrambling networks, they pressure victims using material already taken - payment demands follow exposure threats. Their name surfaced after breaches hit well-known companies, where leaked datasets served as leverage. Rather than causing immediate downtime, their power lies in what could be revealed. 

What stands out lately is how this group exploited a security gap at Anodet, an analytics company, gaining entry through leaked access tokens aimed squarely at cloud-based data systems. Alongside that incident came the public drop of massive corporate datasets - another sign their main goal remains pulling vast amounts of information from high-profile targets. Among recent breaches, the one involving McGraw Hill stands out - not because of its scale, but due to how it reveals weaknesses hidden within standard cloud setups. 

Instead of breaking through strong defenses, hackers often slip in via small errors made during setup steps handled by outside teams. What makes this case notable is less about immediate damage, more about what follows: sensitive information pulled quietly into unauthorized hands. While systems keep running without interruption, stolen data becomes the weapon - threatening public release unless demands are met. 

Over time, such tactics have shifted the focus of digital attacks away from crashes toward silent leaks. With probes still underway, one thing becomes clear: oversight of outside connections matters more now than ever. When digital intruders challenge what companies say, credibility hinges on openness. Tight rules around setup adjustments help reduce weak spots. How firms handle disclosures can shape public trust just as much as technical fixes. Clarity during crises often separates measured responses from confusion.

The Shift from Cyber Defense to Recovery-Driven Security


 

There has been a structural recalibration of cybersecurity strategies as organizations recognize that breaches impact operations, finances, and reputation in ways that extend far beyond the moment of intrusion. 

Incidents that once remained within the domain of IT are now affecting the entire organization, with containment cycles lasting up to months and remediation costs reaching tens of millions for large-scale breaches. 

Leaders in response are shifting their focus from absolute prevention to sustained operational continuity, recognizing that resilience is not defined by the absence of attacks, but rather by the capability of recovering quickly and precisely. 

The shift is driving a renewed focus on creating integrated cyber resilience frameworks that align business continuity objectives with security controls, ensuring critical systems remain recoverable even after active compromises. This evolution has also exposed a disconnect between security enforcement and operational accessibility.

The cybersecurity function has historically prioritized perimeter hardening and strict authentication, whereas business operations demand uninterrupted data availability with minimal friction. As threats intensify, these competing priorities increasingly collide, often revealing inefficiencies in which layered authentication mechanisms, while indispensable, inadvertently delay recovery workflows and extend downtime during critical incidents.

This divide is beginning to be reconciled by integrating adaptive intelligence and automation into Zero Trust architectures. Rather than treating security and recovery as opposing forces, organizations are designing environments where continuous verification coexists with streamlined restoration capabilities.

Zero Trust, at its core, is a strategic model rather than a single technology: it requires rigorous, context-aware authentication using multiple data points before granting access. In combination with intelligent recovery systems, this approach is redefining resilience by enabling secure access without compromising recovery agility, resulting in high-assurance environments able to maintain operations even under persistent threat.

With the increased sophistication of ransomware campaigns, conventional backup-centric strategies are revealing their limitations, as adversaries increasingly design attacks that extend beyond the initial system compromises. Threat actors execute long reconnaissance phases during many incidents, mapping enterprise environments, identifying high-value assets, and, critically, locating backups and undermining them before encrypting or destroying data.

Cybercrime has evolved into a coordinated, enterprise-like ecosystem in which operational disruption is deliberately engineered to maximize leverage across a variety of targets. When attackers compromise recovery pathways, they effectively eliminate an organization's ability to restore from trusted states, amplifying downtime and increasing financial and regulatory risk.

Forward-looking organizations are repositioning their security postures to reflect this reality, incorporating defensive controls into a more holistic security model that includes assured recoverability. This approach integrates cyber resilience and cyber recovery, where the objective is not only to withstand intrusion attempts but to maintain data integrity, availability, and rapid restoration under adversarial circumstances.

Modern cyber recovery architectures reflect these evolving threat dynamics by incorporating resilience from the outset, repositioning data protection from a passive safeguard to an active line of defense. Hardened recovery frameworks, including air-gapped vaulting and immutable storage, are becoming increasingly popular among organizations as a way to ensure backup data cannot be manipulated by adversaries, while advanced malware scanning enables integrity validation before restoration.

Complementing this, recovery processes are tested in isolated, controlled virtual environments, alongside point-in-time restoration capabilities that can return systems to a known, uncompromised state with minimal operational disruption.

Separate recovery enclaves are also crucial: by decoupling backup infrastructure from production networks, they eliminate lateral-movement pathways and blunt credential-based compromise. This architecture ensures that security and compliance requirements are not treated as an afterthought but are built in, supported by comprehensive audit trails, data tagging, and a verifiable chain of custody. Together, these capabilities give organizations a structured, audit-ready recovery posture that maintains business continuity even under sustained cyber pressure - a transition away from purely reactive incident response.

Organizations are also extending their resilience frameworks beyond safeguarding backup repositories toward continuous visibility into repository integrity and behavior. Threat actors increasingly employ persistence-driven techniques that alter backup configurations or introduce incremental data corruption to erode reliable recovery points over time - often without triggering immediate alerts.

Unless granular monitoring is employed, manipulations of this kind can go undetected until the recovery process is initiated, by which point recovery pathways may already be compromised. For this reason, enterprises are integrating advanced telemetry, behavioral analytics, and anomaly detection across backup ecosystems, enabling early detection of irregular access patterns, unauthorized configuration changes, and deviations in data consistency.
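
A minimal sketch of the kind of baseline check this implies: flag backup jobs whose size deviates sharply from the recent trend, which can indicate incremental corruption or tampered configurations. The thresholds, window, and sample data are assumptions for illustration:

```python
# Flag backup jobs that drift sharply from a rolling baseline.
def flag_anomalies(sizes_gb: list[float], window: int = 7, tol: float = 0.3):
    alerts = []
    for i in range(window, len(sizes_gb)):
        baseline = sum(sizes_gb[i - window:i]) / window
        drift = abs(sizes_gb[i] - baseline) / baseline
        if drift > tol:                      # >30% deviation from trend
            alerts.append((i, sizes_gb[i], baseline))
    return alerts

history = [510, 512, 515, 511, 514, 516, 513, 498, 360]  # GB, example data
for day, size, base in flag_anomalies(history):
    print(f"day {day}: backup {size} GB vs ~{base:.0f} GB baseline")
```

In practice the same idea extends to configuration diffs and access patterns, not just job sizes.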

By enhancing proactive visibility, enterprises can not only respond more quickly to incidents but also prevent adversaries from dismantling recovery capabilities silently. Rapid recovery is of little value if latent threats are reintroduced into production environments. 

Furthermore, it is important to ensure that recovered data is intact and uncompromised. In this regard, organizations are integrating validation layers, such as isolated forensic sandboxes and automated recovery testing, to verify backup integrity well in advance of a loss. 

This amounts to a comprehensive architectural shift in which recovery is engineered as a fundamental capability rather than a reactive measure: by embedding immutability, isolation, continuous monitoring, and trusted validation into data protection strategies from the outset, enterprises position themselves to sustain operations with minimal disruption.

Consequently, resilience no longer rests on evading every attack, but on restoring systems quickly and precisely when defenses are inevitably breached. Cybersecurity effectiveness is no longer defined by absolute prevention, but by the assurance that controlled, reliable recovery can be achieved under adverse circumstances.

A growing number of adversaries continue to develop techniques that bypass traditional defenses and target recovery mechanisms themselves, forcing organizations to adopt a design philosophy based on the expectation of compromise rather than treating compromise as an exception. 

In order to maintain operational continuity, it is imperative that security postures, continuous monitoring, and resilient recovery architectures are integrated cohesively. In order to mitigate the cascading impact of cyber incidents, enterprises should align detection capabilities with verified restoration processes and embed trust throughout the recovery lifecycle. 

The key to resilience is not eliminating risk, but building the ability to absorb disruption, restore critical systems with integrity, and sustain business operations without interruption in a world where cyber incidents have become an operational certainty rather than merely a possibility.

AI Was Meant to Help. So Why Is It Making Work Harder for Women in Indonesia?

 



Artificial intelligence is often presented as a neutral and forward-looking force that improves efficiency and removes human bias from decision-making. In practice, however, many women working in Indonesia’s gig economy experience these systems very differently. Rather than easing workloads, AI-driven platforms are intensifying existing pressures.

Recent research examining female gig workers introduces the concept of “AI colonialism.” This idea describes how older patterns of domination continue through digital systems. In this framework, powerful technology actors, largely based in wealthier regions, extract labour, data, and economic value from workers in developing countries, reinforcing unequal global relationships. The structure resembles historical colonial systems, but operates through algorithms and platforms instead of direct political control.

In Indonesia, platforms such as Gojek, Grab, Maxim, and Shopee rely heavily on informal workers. These companies have not transformed the nature of employment. Instead, they have digitised an already informal labour market. Workers are labelled as independent “partners,” which excludes them from basic protections such as minimum wages, paid sick leave, and maternity benefits. Earnings depend entirely on the number of completed tasks and algorithm-based performance scores.

For women, this structure intersects with what is often described as the “double burden,” where paid work must be balanced alongside unpaid domestic responsibilities. One delivery worker, Lia, begins her day before sunrise by preparing meals and organising her children’s routines. Only after completing these responsibilities can she log into the platform. As she explains, the system recognises only whether she is online, not the constraints shaping her availability.

Platform algorithms prioritise continuous, uninterrupted activity. Incentive systems often require completing a fixed number of orders within strict time windows. For workers managing caregiving roles, this creates structural disadvantages. Logging off to attend to family responsibilities can result in lost bonuses, while reducing work hours due to fatigue or health issues leads to declining performance metrics.

This reflects a greater economic reality in which unpaid domestic labour underpins the formal economy without recognition or compensation. Instead of addressing this imbalance, AI systems can intensify it. Another worker, Cinthia, observed a noticeable drop in job assignments after taking time off due to illness. The experience created a sense that the system penalises any interruption, making workers reluctant to pause even when necessary.

Although algorithms do not explicitly target women, they are designed around an ideal worker who is always available and unconstrained by caregiving duties. This assumption produces indirect but consistent disadvantage. The claim that digital platforms operate neutrally is further challenged by everyday experiences. For example, a driver named Yanti often informs passengers in advance that she is female, leading to frequent cancellations. While the system records these cancellations, it does not capture the gender bias behind them.

Safety concerns also shape participation. Many women avoid working late hours due to risk, which limits access to peak-demand periods and higher earnings. The system interprets this reduced availability as lower productivity. Scholars such as Virginia Eubanks have argued that automated systems frequently replicate and amplify existing social inequalities rather than eliminate them.

Similar patterns have been observed in other countries. In India, women working in ride-hailing services report lower average earnings, partly because safety considerations influence when and where they work. Algorithms, however, measure output without accounting for these risks.

Safety challenges persist even within delivery roles. Around 90% of women in group discussions reported choosing delivery work over ride-hailing due to perceived safety advantages, yet harassment remains a concern from both customers and other drivers. During the COVID-19 pandemic, gig workers were classified as essential, but their incomes declined sharply, in some cases by up to 67% in early 2020. To compensate, many worked more than 13 hours a day. Despite these conditions, platform performance systems remained unchanged, and illness-related breaks often resulted in lower ratings.

This points to a deeper shift in contemporary labour control, where oversight is embedded within digital systems rather than managed by human supervisors. AI colonialism, in this sense, extends beyond ownership to the structure of control itself. Workers provide labour, time, and data, while platforms retain authority over decision-making processes.

In response, women workers have developed informal networks through messaging platforms to share information, warn others about unsafe situations, and adapt to algorithmic changes. They support each other by increasing activity on inactive accounts, lending money for operational costs, and collectively responding to account suspensions. When harassment occurs, information is circulated quickly to protect others.

These practices represent a form of mutual support rooted in shared vulnerability. Rather than relying on formal recognition as employees, many women build systems of protection among themselves. This surfaces a form of everyday resistance, where collective action becomes a strategy for navigating structural constraints.

Artificial intelligence is not inherently exploitative. However, when deployed within unequal economic systems, it can reinforce patterns of extraction and imbalance. As digital platforms continue to expand, understanding the lived experiences of workers, particularly women in developing economies, is essential. Behind every efficient system is a human reality shaped by trade-offs between income, safety, and dignity.


Rival Ransomware Gangs 0APT And Krybit Clash In Unusual Cyber Extortion Battle

 

A clash almost unseen among digital outlaws has begun - 0APT, a hacking collective, now warns it will unmask operatives from enemy faction Krybit. This shift came to light through surveillance of hidden online forums. Tension simmers beneath the surface of these underground circles. Rival gangs once operating in parallel seem to fracture under pressure. Trust, usually scarce, is vanishing faster than usual. Evidence points toward escalating friction inside ransomware communities. 

What began as covert threats may reshape alliances unexpectedly. Reports indicate 0APT sent a threat to Krybit, insisting on payment under risk of exposing private records - names, positions, operational files - if ignored. A limited set of claimed stolen materials was published shortly after, serving as evidence - a move mirroring classic dual-pressure methods seen in attacks on businesses. Yet using such an approach toward another illicit network stirs doubt around its real impact, given that public image matters little within hidden communities. 

Even so, the danger remains real. Because cybercrime networks depend on staying hidden, revealed identities might invite legal trouble or revenge attacks. From the exposed information, security analysts pulled login details tied to Krybit members - alongside digital currency wallets - hinting at weak points in how the group functions. Yet the full impact stays unclear. Krybit's site now displays only a standard upkeep notice, hinting at disruptions tied to recent events. Little is known about the collective so far, mainly because major security firms have published almost nothing on them - possibly a sign they are just beginning operations.

On the opposite end, 0APT emerged around spring 2026 and gained attention fast, marked by complex tools and methods, even though some doubt surrounds how truthful their early reports of breaches really were. Odd as it seems, infighting among hackers has happened before. Earlier clashes included DragonForce going after opponents - BlackLock, then Mamona - by altering web pages and exposing private messages. 

In much the same way, activity aimed at RansomHub tied back to DragonForce, revealing ongoing friction between ransomware crews. This conflict taking shape between 0APT and Krybit signals changes in how cybercriminals operate - motives like money, dominance, and competition now spark open clashes. With ransomware networks evolving fast, these kinds of face-offs might happen more often, making it harder for security experts to follow the players involved.

UAE Businesses Warned of Escalating AI‑Powered Cyber Threats

 

UAE businesses are being urgently warned about a sharp rise in AI‑powered cyber threats that can compromise systems within hours, and sometimes even minutes, if organisations remain unprepared. Cybercriminals are increasingly using artificial intelligence to craft highly realistic phishing emails, deepfake voice and video impersonations, and automated attacks that exploit gaps in security before teams can respond. 

Nature of AI‑driven threats 

Attackers are leveraging generative AI to personalize scams at scale, including cloned emails, synthetic voices, and fake video calls that mimic senior executives or partners. These AI‑enabled methods make spear‑phishing and impersonation fraud far more convincing, increasing the chances that employees will authorise fraudulent transfers or share sensitive credentials. 

AI tools now allow adversaries to perform reconnaissance, scan for vulnerabilities, and launch password‑guessing and ransomware attacks in a fraction of the time it once took. Security experts note that many organisations now face same‑day compromises, where attackers move from initial access to data theft or system encryption within a single business day.

Impact on UAE firms and the economy 

The UAE’s role as a regional financial and technology hub makes it a prime target for state‑backed and criminal hacking groups that use AI to intensify their campaigns. Breaches can lead to substantial financial losses, reputational damage, regulatory penalties, and disruption of critical services, especially as digital‑government and smart‑city initiatives expand.

Cyber professionals recommend continuous staff training on spotting AI‑powered phishing and impersonation, tightening access controls, securing machine identities, and maintaining tested incident‑response and recovery plans. With AI adoption accelerating across industries, firms that act quickly to strengthen cyber resilience will be better positioned to withstand the next wave of AI‑enhanced cyber threats in the UAE.

Pre-Stuxnet Fast16 Threat Revealed Targeting Engineering Environments


 

New discoveries regarding the early stages of cyber sabotage are changing the historical timeline of offensive digital operations, revealing that sophisticated disruption techniques were developed well before they became widely known.

An undocumented malware framework dating to the mid-2000s underscores the extent to which threat actors were already manipulating industrial and engineering systems with precision, laying the foundations for the highly specialized cyber weapons that would follow.

Against this backdrop, cybersecurity researchers have identified a Lua-based malware framework, named fast16, that predates the Stuxnet worm by several years. According to a detailed analysis published by SentinelOne, the framework originated around 2005 and was focused on high-precision engineering and calculation software.

Rather than causing immediate system failure, fast16 was designed to subtly corrupt computational outputs, introducing inaccuracies that propagate across interconnected environments. With its lightweight scripting capabilities and seamless integration with C/C++, Lua is an excellent choice for modular malware development, allowing attackers to extend functionality without recompiling core components.
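To make the pattern concrete, here is a minimal, hedged sketch of the host-plus-embedded-VM design the researchers describe - not fast16's actual code, and written in Python with the third-party lupa binding rather than the native C/C++ host fast16 used. The payload string and function name are hypothetical stand-ins for a task-specific script module:

    # Minimal sketch of a stable host handing task-specific logic to an
    # embedded Lua runtime. Assumes the third-party lupa binding
    # (pip install lupa); the payload below is a benign placeholder.
    from lupa import LuaRuntime

    lua = LuaRuntime(unpack_returned_tuples=True)

    # In the design described above, this script would be delivered
    # separately (often encrypted), so new functionality requires no
    # recompilation of the host binary.
    payload = """
    function scale_result(x)
        return x * 1.001  -- benign stand-in for payload logic
    end
    """
    lua.execute(payload)

    # The host calls into the script through the VM's global table.
    print(lua.globals().scale_result(100.0))  # 100.1

This separation is what lets an operator swap payload bytecode without touching a long-lived host executable.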

Upon analyzing fast16, researchers identified distinct Lua artifacts, including bytecode signatures beginning with \x1bLua and environmental markers such as LUA_PATH, which allowed them to trace svcmgmt.exe, a sample that initially appeared benign but ultimately proved to be part of the early attack framework.
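As a hedged illustration of that triage step, the sketch below scans an arbitrary binary for the Lua 5.0 bytecode header (the escape byte 0x1b followed by "Lua") and for Lua environment strings such as LUA_PATH. It is deliberately simple - real tooling does far more - and the command-line usage is hypothetical:

    import sys

    # Artifacts described in the analysis: the Lua bytecode header and
    # Lua environment-variable strings.
    LUA_MAGIC = b"\x1bLua"
    ENV_MARKERS = (b"LUA_PATH", b"LUA_CPATH")

    def find_all(data: bytes, needle: bytes):
        i = data.find(needle)
        while i != -1:
            yield i
            i = data.find(needle, i + 1)

    def scan(path: str) -> None:
        data = open(path, "rb").read()
        for off in find_all(data, LUA_MAGIC):
            # In Lua 5.0, the byte after the header encodes the version (0x50).
            ver = data[off + 4] if off + 4 < len(data) else 0
            print(f"bytecode header at 0x{off:x} (version byte 0x{ver:02x})")
        for marker in ENV_MARKERS:
            for off in find_all(data, marker):
                print(f"{marker.decode()} string at 0x{off:x}")

    if __name__ == "__main__":
        scan(sys.argv[1])  # e.g. python lua_scan.py svcmgmt.exe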

Researchers Vitaly Kamluk and Juan Andrés Guerrero-Saade concluded that the malware's architecture suggested a deliberate intent to spread disruption through self-propagation mechanisms, effectively standardizing erroneous results across entire facilities. This approach reflects an early understanding of systemic compromise, one that targets data integrity rather than availability as the primary attack vector.

Fast16 is estimated to have emerged at least five years before Stuxnet, widely regarded as the world's first digital weapon designed for physical disruption. Stuxnet has historically been associated with state-sponsored efforts to disrupt Iran's nuclear infrastructure and later influenced Duqu and other tools; fast16 now offers a compelling precedent.

The report demonstrates that the conceptual basis for cyber-physical sabotage had already been explored in earlier, less visible campaigns, suggesting a longer and more complex evolution of offensive cyber capabilities than previously assumed. Further reverse engineering confirmed that fast16 did not conform to the typical malware engineering patterns observed in the mid-2010s, when it surfaced.

As Vitaly Kamluk observed, several implementation choices indicated that the project was developed much earlier than when it was first observed, a view SentinelOne later reinforced with environmental and code-level evidence.

The sample exhibits compatibility limitations consistent with legacy systems: it executes reliably only on Windows XP and single-core processors, pointing to development before Intel introduced multi-core consumer processors in 2006.

Behavioral analysis shows that the implant deploys a kernel-level component, fast16.sys, alongside worm-like propagation routines to establish persistence. Its architecture predates other advanced threats such as Flame, making it among the earliest known examples of Windows-based malware that embeds a Lua virtual machine as an integral component.

The svcmgmt.exe executable, initially identified as a generic service wrapper, appears to sit at the origin of the framework: it was later found to contain the Lua 5.0 runtime and an encrypted bytecode payload. Timestamp metadata indicates a build date of August 2005, while the sample was not submitted to VirusTotal until more than a decade later, further supporting the framework's long history.
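The build-date claim rests on a standard field in the PE header. As a hedged sketch of how an analyst typically reads it (assuming the third-party pefile library; the file name simply echoes the sample discussed above):

    import datetime
    import pefile  # third-party: pip install pefile

    pe = pefile.PE("svcmgmt.exe")
    ts = pe.FILE_HEADER.TimeDateStamp  # seconds since the Unix epoch

    # Worth remembering: this field is attacker-controllable, so it is
    # corroborating evidence for an August 2005 build, not proof.
    print(datetime.datetime.fromtimestamp(ts, tz=datetime.timezone.utc))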

Deeper inspection revealed tight integration with Windows NT subsystems, including direct interaction with the file system, registry, service control, and networking APIs. While the Lua bytecode contains the core execution logic, an associated driver, whose PDB path dates to July 2005, intercepts and manipulates executable data as it is read from disk - an advanced stealth and control technique.

Additionally, references to "fast16" have been found within driver lists associated with sophisticated intrusion toolsets reportedly linked to the National Security Agency and disclosed by the Shadow Brokers. This intersection of technical lineage and leaked operational tooling deepens the ambiguity surrounding the framework's origins while underscoring its significance in the early development of cyber-physical attack methodologies.

Further analysis positions svcmgmt.exe as the operational core of the framework, a highly flexible carrier that adapts its execution paths to runtime conditions. SentinelOne asserts that embedded forensic markers, particularly a PDB path, link the sample to deconfliction signatures revealed in leaks attributed to National Security Agency tooling, suggesting a far more sophisticated origin.

Architecturally, the module consists of three components: Lua bytecode controlling configuration and propagation logic, a dynamic library that assists with configuration, and a kernel-level driver (fast16.sys) that performs low-level manipulations. Once installed as a Windows service, the malware can elevate privileges by activating the kernel implant and initiating a controlled propagation routine that targets legacy Windows environments with weak authentication controls.

Its conditional execution places particular emphasis on operational stealth, triggering either manually or when registry inspections detect specific security products - an early but deliberate effort to control how it spreads. Functionally, the kernel driver embodies the framework's sabotage capability, intercepting executable flows and modifying them according to rule-based logic, especially against binaries compiled with Intel C/C++ tools. As a result, the outputs of high-precision engineering and simulation platforms such as LS-DYNA, PKPM, and MOHID can be precisely manipulated.

By introducing subtle, systematic deviations into mathematical models, the malware can degrade simulation accuracy, undermine research integrity, and affect real-world engineering outcomes over the long term. Supporting modules provide further situational awareness; a network monitoring component, for example, logs connection information through Remote Access Service hooks, strengthening the framework's surveillance capabilities.
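To see why such small deviations matter, consider a toy model (purely illustrative, not fast16's payload logic): a 0.1% multiplicative bias applied at each of 1,000 steps of an iterative computation compounds to roughly a 172% error, far beyond anything an engineer would attribute to normal numerical noise.

    # Toy model of compounding numerical sabotage. Purely illustrative.
    true_value = 1.0
    corrupted = 1.0
    bias = 1.001  # 0.1% deviation injected per step

    for _ in range(1000):
        true_value *= 1.0        # the honest computation (identity step)
        corrupted *= 1.0 * bias  # the same step, subtly biased

    # 1.001 ** 1000 is about 2.717, i.e. roughly 172% relative error.
    print(f"relative error after 1000 steps: {corrupted / true_value - 1:.2%}")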

The modular separation of a stable execution wrapper from encrypted, task-specific payloads reflects a reusable design philosophy, allowing operators to tailor deployments while maintaining a consistent outer binary footprint. These findings significantly revise the accepted timeline of cyber-physical attacks within the broader threat landscape.

Correlations with artifacts released by the Shadow Brokers, as well as with early offensive toolchains, suggest that capabilities often associated with later campaigns, including Stuxnet, were being developed - and may have been deployed - years earlier. Fast16 is thus no longer merely an isolated discovery but a transitional framework, bridging covert early-stage experimentation and the more visible development of advanced persistent threats.

During the period covered by the report, state-aligned actors were operationalizing long-term, precision-focused sabotage strategies well before such activities became public knowledge - an era in which software was already becoming a strategic tool for influencing physical systems.

The emergence of fast16 reframes long-held assumptions about the origins of cyber-physical sabotage, demonstrating that highly targeted, computation-focused attack models were operational well before their public recognition. Its modular design, selective propagation logic, and precision-driven payloads show a maturity typically associated with much later advanced persistent threat campaigns.

Beyond its strategic significance, the report emphasizes the shift away from disruptive attacks targeting system availability toward covert manipulation of data integrity within critical engineering environments.

Fast16 is therefore both a historical anomaly and a prototype of modern state-aligned cyber operations, in which subtle interference can have far-reaching impact without immediate detection.

Google Chrome Introduces “Skills” to Reuse AI Prompts Across Web Pages with Gemini Integration

 

Google has announced a new wave of AI-powered enhancements for its Chrome browser, unveiling a feature called “Skills.” This addition enables users to store and reuse their preferred AI prompts across different websites, eliminating the need to repeatedly type them.

The new functionality builds on Chrome’s integration with Gemini, which arrived as competition in the browser space heated up with offerings from companies like OpenAI (Atlas), Perplexity (Comet), and The Browser Company (Dia).

Gemini already enables users to interact with web pages by asking questions, generating summaries, or completing tasks. With the addition of Skills, users can now save frequently used prompts and activate them instantly whenever needed.

For example, Google notes that users who regularly ask Gemini for vegan alternatives while browsing recipes can save that instruction as a Skill and apply it seamlessly across multiple sites. These prompts can be saved directly from chat history and later accessed by typing a forward slash (/) or clicking the plus (+) icon. Once selected, the Skill executes on the current page and can also extend to other selected tabs.

Google highlighted that Skills remain flexible, allowing users to modify them at any time. Early testing showed that adopters used the feature for tasks such as tracking nutrition metrics in recipes, comparing products while shopping, and summarizing long-form content.

To simplify onboarding, Google is also launching a Skills library featuring ready-made prompts for common use cases like productivity, budgeting, shopping, and cooking. Users can add these pre-built Skills to their collection and customize them as needed.

Similar to other Gemini-powered actions in Chrome, the browser will request user approval before carrying out sensitive tasks, such as sending emails or scheduling calendar events.

The rollout of Skills begins today for desktop Chrome users logged into their Google accounts. Initially, the feature will only be available when the browser language is set to English (US).

New Malware “Storm” Steals Browser Data and Hijacks Sessions Without Passwords

 



A newly identified infostealer called Storm has emerged on underground cybercrime forums in early 2026, signalling a change in how attackers steal and use credentials. Priced at under $1,000 per month, the malware collects browser-stored data such as login credentials, session cookies, and cryptocurrency wallet information, then covertly transfers the data to attacker-controlled servers where it is decrypted outside the victim’s system.

This change becomes clearer when compared to earlier techniques. Traditionally, infostealers decrypted browser credentials directly on infected machines by loading SQLite libraries and accessing local credential databases. Because of this, endpoint security tools learned to treat such database access as one of the strongest indicators of malicious activity.
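As a hedged sketch of that traditional on-device pattern - and of the file access that endpoint tools learned to flag - the snippet below copies Chrome's Login Data SQLite database and enumerates the logins table. The profile path is the common Windows default, and note that password_value stays an encrypted blob here; older stealers added a local decryption step on top of exactly this access:

    import os
    import shutil
    import sqlite3
    import tempfile

    # Default Chrome profile path on Windows; adjust per environment.
    src = os.path.expandvars(
        r"%LOCALAPPDATA%\Google\Chrome\User Data\Default\Login Data")

    # The live database is locked by the browser, so tools copy it first -
    # the kind of access that became a strong detection signal.
    tmp = os.path.join(tempfile.mkdtemp(), "LoginData.db")
    shutil.copy2(src, tmp)

    con = sqlite3.connect(tmp)
    for url, user, blob in con.execute(
            "SELECT origin_url, username_value, password_value FROM logins"):
        print(url, user, f"<{len(blob)} encrypted bytes>")
    con.close()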

That approach began to break down after Google Chrome introduced App-Bound Encryption in version 127 in July 2024. The mechanism tied encryption keys to the browser environment itself, making local decryption dramatically more difficult. Initial bypass attempts relied on injecting into browser processes or exploiting debugging protocols, but those techniques still generated detectable traces.
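One concrete way to see the change: since App-Bound Encryption, Chrome's Local State file carries an app-bound key alongside the older DPAPI-wrapped key. The hedged sketch below inspects that file; the os_crypt key names reflect publicly documented Chromium behaviour but should be treated as assumptions:

    import base64
    import json
    import os

    local_state = os.path.expandvars(
        r"%LOCALAPPDATA%\Google\Chrome\User Data\Local State")

    with open(local_state, encoding="utf-8") as f:
        os_crypt = json.load(f)["os_crypt"]

    # 'encrypted_key' is the older DPAPI-wrapped key;
    # 'app_bound_encrypted_key' (name per public Chromium sources)
    # arrived with version 127 and can only be unwrapped from within
    # the browser's own context.
    if "app_bound_encrypted_key" in os_crypt:
        raw = base64.b64decode(os_crypt["app_bound_encrypted_key"])
        print("App-Bound Encryption in use; key blob prefix:", raw[:4])
    else:
        print("No app-bound key found; profile predates the change")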

Storm avoids this entirely by skipping local decryption. Instead, it extracts encrypted browser files and quietly sends them to attacker infrastructure, removing the behavioural signals that endpoint tools typically rely on. It extends this model by supporting both Chromium-based browsers and Gecko-based browsers such as Firefox, Waterfox, and Pale Moon, whereas tools like StealC V2 still handle Firefox data locally.

The data collected includes saved passwords, session cookies, autofill entries, Google account tokens, payment card details, and browsing history. This combination gives attackers everything required to rebuild authenticated sessions remotely. In practice, a single compromised employee browser can provide direct access to SaaS platforms, internal systems, and cloud environments without triggering any password-based alerts.

Storm also automates session hijacking. Once decrypted, credentials and cookies appear in the attacker’s control panel. By supplying a valid Google refresh token along with a geographically matched SOCKS5 proxy, the platform can silently recreate the victim’s active session.
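Mechanically, this is just the standard OAuth refresh flow, which is exactly why stolen refresh tokens are so valuable: the request is indistinguishable from a legitimate client's. A hedged sketch, with every value a placeholder:

    import requests  # third-party; SOCKS support needs requests[socks]

    # Standard Google OAuth2 token refresh - the same endpoint and
    # parameters a legitimate client uses. All values are placeholders.
    resp = requests.post(
        "https://oauth2.googleapis.com/token",
        data={
            "client_id": "CLIENT_ID",
            "client_secret": "CLIENT_SECRET",
            "refresh_token": "REFRESH_TOKEN",
            "grant_type": "refresh_token",
        },
        # A proxy matched to the victim's region defeats naive
        # geo-anomaly checks; the address is a placeholder.
        proxies={"https": "socks5://proxy.example:1080"},
        timeout=30,
    )
    print(resp.json().get("access_token"))

For defenders, the takeaway is that this flow bypasses password and MFA checks entirely, so monitoring for anomalous token issuance matters as much as credential hygiene.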

This technique aligns with earlier research by Varonis Threat Labs. Its Cookie-Bite study showed that stolen Azure Entra ID session cookies can bypass multi-factor authentication, granting persistent access to Microsoft 365. Similarly, its SessionShark analysis demonstrated how phishing kits intercept session tokens in real time to defeat MFA protections. Storm packages these methods into a commercial subscription service.

Beyond credentials, the malware collects files from user directories, extracts session data from applications like Telegram, Signal, and Discord, and targets cryptocurrency wallets through browser extensions and desktop applications. It also gathers system information and captures screenshots across multiple monitors. Most operations run in memory, reducing the likelihood of detection.

Its infrastructure design adds resilience. Operators connect their own virtual private servers to Storm’s central system, routing stolen data through infrastructure they control. This setup limits the impact of takedowns, as enforcement actions are more likely to affect individual operator nodes rather than the core service.

Storm supports multi-user operations, allowing teams to divide responsibilities such as log access, malware build generation, and session restoration. It also automatically categorises stolen credentials by service, with visible rules for platforms including Google, Facebook, Twitter/X, and cPanel, helping attackers prioritise targets.

At the time of analysis, the control panel displayed 1,715 log entries linked to locations including India, the United States, Brazil, Indonesia, Ecuador, and Vietnam. While it is unclear whether all entries represent real victims or test data, variations in IP addresses, internet service providers, and data volumes suggest ongoing campaigns.

The logs include credentials associated with platforms such as Google, Facebook, Twitter/X, Coinbase, Binance, Blockchain.com, and Crypto.com. Such information often feeds into underground credential marketplaces, enabling account takeovers, fraud, and more targeted intrusions.

Storm is offered through a tiered pricing model: $300 for a seven-day trial, $900 per month for standard access, and $1,800 per month for a team licence supporting up to 100 operators and 200 builds. Use of an additional crypter is required. Notably, once deployed, malware builds continue operating even after a subscription expires, allowing ongoing data collection.

Security researchers view Storm as part of a broader evolution in credential theft. By shifting decryption to remote servers, attackers avoid detection mechanisms designed to identify on-device activity. At the same time, session cookie theft is increasingly replacing password theft as the primary objective.

The data collected by such tools often marks the beginning of further attacks, including logins from unusual locations, lateral movement within networks, and unauthorised access patterns.


Indicators of compromise include:

Alias: StormStealer

Forum ID: 221756

Registration date: December 12, 2025

Current version: v0.0.2.0 (Gunnar)

Build details: Developed in C++ (MSVC/msbuild), approximately 460 KB in size, targeting Windows systems


The advent of Storm underscores how cybercriminal tools are becoming more advanced, automated, and difficult to detect, requiring organisations to strengthen monitoring of sessions, user behaviour, and access patterns rather than relying solely on traditional credential protection methods.


AI-Driven Breach Hits Mexican Government Agencies

 

A lone attacker reportedly used Claude and GPT-4.1 to breach nine Mexican government agencies, exposing data tied to 195 million citizens and showing how generative AI can accelerate cybercrime. The incident, which ran from December 2025 to February 2026, is a stark warning that AI can now amplify a single operator into something closer to a full attack team. 

Between late 2025 and early 2026, the attacker used Claude Code to carry out about 75% of remote commands during the intrusion. Researchers found 1,088 prompts across 34 active sessions, which led to 5,317 AI-executed commands on live victim systems. That level of automation meant the attacker could move through government networks far faster than a human-only workflow would allow.

The operation did not rely on one model alone. When Claude encountered limits, the attacker turned to ChatGPT for help with lateral movement, credential mapping, and other technical steps that supported the breach. A custom 17,550-line Python script then funneled stolen data through OpenAI’s API, generating 2,597 structured intelligence reports across 305 internal servers. 
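The script itself has not been published; as a hedged illustration of the generic pattern described - raw harvested text in, structured summary out - the sketch below uses the OpenAI Python SDK with placeholder input:

    from openai import OpenAI  # third-party SDK: pip install openai

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Generic structuring pattern only; the attacker's actual
    # 17,550-line script is not public, and this input is a placeholder.
    raw_text = "server inventory listing (placeholder)"
    resp = client.chat.completions.create(
        model="gpt-4.1",
        messages=[
            {"role": "system",
             "content": "Summarize the following data as a short structured report."},
            {"role": "user", "content": raw_text},
        ],
    )
    print(resp.choices[0].message.content)

Automation at this scale - thousands of near-identical structuring calls against one workload - is precisely the pattern that provider-side abuse monitoring is designed to catch.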

The stolen material reportedly included tax records, voter information, employee credentials, and other sensitive government data. Beyond the scale of the theft, the bigger problem is what this means for defense teams: AI can shorten the time needed to find weaknesses, write exploits, and organize stolen data. That compression makes traditional detection and response windows much harder to meet. 

This case shows that cybercriminals no longer need large teams to mount sophisticated operations. With the right prompts, a single attacker can use commercial AI systems to plan, automate, and scale an intrusion in ways that were once reserved for advanced groups. Anthropic said it investigated, disrupted the activity, and banned the accounts involved, but the broader lesson is clear: security defenses now need to account for AI-accelerated attacks as a mainstream threat.