Retailer Secures Website After Customer Data Leak Risk Identified


 

Express has quietly fixed a security flaw that allowed unauthorized access to customer order data, a significant lapse in web application security. The vulnerability exposed sensitive information, including customer names, email addresses, telephone numbers, shipping details, and partial payment data, after order confirmation pages were inadvertently indexed by search engines and made publicly visible.

At least a dozen such records appeared in search results, demonstrating that sequential order identifiers embedded in URLs could be exploited without sophisticated intrusion techniques. The issue was uncovered by an independent security researcher during a fraud investigation, showing how seemingly routine inquiries can reveal deeper systemic weaknesses in data handling and access controls. The company then took immediate corrective measures.

The exposed records disclosed a wide range of personally identifiable information, including customer names, phone numbers, email addresses, billing and delivery addresses, and masked payment card details, all accessible via public order confirmation pages. Because of inadequate access controls and predictable URL patterns, users could enumerate order records simply by altering parameters in the web address.
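
The enumeration risk described above comes down to guessable identifiers: if order 10234 is valid, order 10235 probably is too. A minimal sketch (hypothetical function names, not Express's actual code) contrasting sequential IDs with unguessable random tokens:

```python
import secrets

def next_sequential_id(last_id: int) -> int:
    # Sequential IDs: anyone who sees one order URL can enumerate the rest
    # by incrementing the number in the address bar.
    return last_id + 1

def new_order_token() -> str:
    # Random tokens: 128 bits of entropy makes guessing another customer's
    # order URL computationally infeasible.
    return secrets.token_urlsafe(16)

# A sequential scheme is trivially enumerable:
ids = [next_sequential_id(10234 + i) for i in range(3)]
print(ids)  # [10235, 10236, 10237]

# A token scheme yields unrelated, unguessable values:
print(new_order_token())
```

Unguessable URLs mitigate casual enumeration, but they are not a substitute for access control: confirmation pages should also require the order's owner to be authenticated and carry `noindex` headers so search engines never cache them.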

While investigating a suspicious transaction involving a family member, researcher Rey Bango discovered that a simple search query could surface unrelated customer orders that had been indexed by search engines.

Upon disclosure of the incident, Express, now owned by WHP Global, took steps to remediate the issue. However, the company has not clarified whether affected individuals will receive formal notification. While Joe Berean reaffirmed the organization's commitment to safeguarding consumer data and encouraged responsible reporting of vulnerabilities, he did not outline a structured vulnerability disclosure process.

A number of data exposure incidents have been linked to misconfigured web assets over the past year, reinforcing the persistent gaps in secure development practices and the challenges enterprises face in preventing unintended data leaks at scale.

The discovery was largely accidental, arising from Bango's attempt to validate a potentially fraudulent transaction on a family member's account. In the absence of a clearly defined reporting channel, he escalated the issue by submitting a report directly to the company to ensure prompt resolution. His findings showed that, because confirmation pages had been indexed and order identifiers were sequential, search engines could surface unrelated customers' records in response to queries for order numbers.

Independent verification confirmed that minor manipulation of URL parameters enabled unauthorized access to other users' order histories and personal information, a vulnerability that automated enumeration could have amplified. Express addressed the flaw after it was disclosed, but has yet to clarify whether affected customers will be notified or whether forensic logs can determine the extent of unauthorized access.

Joe Berean, the company's marketing head, reinforced Express's commitment to data security but offered limited transparency about incident response measures, saying nothing about a formal vulnerability disclosure framework or regulatory notification requirements.

The lack of clarity around follow-up compliance, particularly with U.S. breach disclosure requirements, underscores persistent governance gaps. The episode fits a broader pattern of misconfiguration-related exposure incidents, as seen in recent disclosures involving Home Depot and Petco. When security controls are overlooked, sensitive customer data remains accessible, highlighting the ongoing challenge of enforcing robust web application security.

The incident illustrates how relatively simple design oversights, such as predictable identifiers and improperly restricted web resources, can quickly morph into large-scale privacy risks when combined with search engine indexing and absent disclosure mechanisms.

The company has taken steps to resolve the immediate vulnerability, but the lack of clarity around notification to customers, audit logging, and formal vulnerability intake procedures raises concerns regarding incident readiness and accountability. 

As digital commerce footprints expand, the case illustrates the necessity of incorporating secure-by-design principles, robust access controls, and transparent reporting mechanisms so that flaws are addressed before they become more serious.

When these safeguards are not in place, even routine transactional systems can become unintentional points of vulnerability, reinforcing the necessity of continuous security validation throughout the lifecycle of an application.

Fake Court Summons And Survey Scams Surge As Regions Bank Warns Of Rising Consumer Fraud Risks

 


Fear remains one of the most powerful tools scammers use, and today’s fraud tactics are evolving to exploit it more effectively than ever. Fake court summons and deceptive online survey scams are now being widely used to trick individuals into revealing sensitive information or making payments. Regions Bank has raised awareness around these threats, emphasizing that such schemes are designed to steal passwords, drain bank accounts, or silently install malware on personal devices. 

One of the more alarming trends involves fraudulent legal notices. Victims may receive messages claiming they missed a court date, failed to pay a toll, or owe a penalty. These alerts often create a sense of urgency, warning of arrest or severe consequences if immediate action is not taken. The goal is to push individuals into reacting quickly without verifying the information. Instead of legitimate resolution channels, these messages direct users to click suspicious links, scan QR codes, or call phone numbers that connect them directly to scammers.  

Although these communications can appear convincing, they often contain clear warning signs. Aggressive or threatening language, demands for immediate payment, and instructions to use unconventional methods such as gift cards or wire transfers are strong indicators of fraud. Genuine legal authorities follow formal processes and provide verifiable documentation, allowing individuals to confirm claims through official sources. Ignoring these red flags can lead to serious financial and data security consequences. Another emerging tactic involves fake CAPTCHA prompts. 

These scams exploit the familiarity of “I’m not a robot” verification tools but introduce unusual instructions, such as pressing specific keyboard shortcuts. What seems like a routine step can actually trigger hidden malicious code, potentially installing malware on the user’s device. Legitimate CAPTCHA systems are simple and never require complex or unexpected actions, making any deviation a likely sign of a scam. Survey scams represent another widespread threat. These schemes lure victims with promises of rewards such as cash, gift cards, or free products. After completing a series of questions, users are told they have “won” and are asked to provide payment details for a small fee. 

In reality, the reward never materializes, and the scammers gain access to valuable financial information. Organizations like the Better Business Bureau have noted a rise in such scams, highlighting unrealistic offers, vague company information, suspicious links, and poor grammar as common warning signs. If individuals encounter these scams, experts recommend deleting the message immediately, avoiding any engagement, and reporting the incident through official platforms such as the Internet Crime Complaint Center. Acting quickly is critical, especially if personal or financial information has already been shared. 

Ultimately, staying vigilant is the most effective defense. Avoid clicking on unknown links, verify information through trusted sources, enable multi-factor authentication, and regularly monitor financial accounts for unusual activity. These scams rely on urgency, fear, and enticing rewards to bypass rational thinking. While tactics continue to evolve, a cautious and informed approach remains the strongest way to protect against fraud in an increasingly digital environment.

Bank of America Bets Big on Risky Anthropic AI

 

Bank of America is aggressively expanding its use of Anthropic's advanced AI technology, even as U.S. regulators issue stark cybersecurity warnings. The bank's commitment highlights a broader trend where nearly 70% of financial institutions integrate AI into operations, prioritizing innovation over potential risks. This move comes amid global concerns about Anthropic's Claude Mythos Preview model, which has detected thousands of high-severity vulnerabilities in major operating systems and browsers. 

In early April 2026, Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell urgently met with CEOs from top U.S. banks, including Bank of America, to flag risks from Mythos. Officials warned that deploying the model could expose customer personal data to cyber threats, prompting Anthropic to limit access to a select group of tech and banking experts. World leaders echoed these fears: Bank of England Governor Andrew Bailey called AI a "very serious challenge," while ECB President Christine Lagarde supported restrictions on the technology. 

Anthropic itself has cautioned about the dangers, stating that rapid AI progress could spread powerful vulnerability-detection capabilities to unsafe actors, with severe fallout for economies and national security. Despite this, banks like JPMorgan, Goldman Sachs, Citigroup, and Bank of America are testing Mythos to bolster their own defenses. Canadian regulators and European counterparts have also raised alarms, underscoring the technology's global implications. 

Bank of America leads in AI adoption, with over 90% of its 200,000+ employees using the tools daily and a client-facing AI assistant logging three billion interactions in 2025 alone. Backed by a $13.5 billion tech budget—including $4 billion for AI initiatives—the bank focuses on end-to-end process transformation to boost revenue, client experience, and efficiency. Recent rollouts include an AI tool for financial advisors to identify prospects and summarize meetings. 

Bank of America's CTO Hari Gopalkrishnan emphasized balancing scale with governance at the Semafor World Economy 2026 summit, noting, "If you overdo it, you stall innovation. If you underdo it, you introduce a lot of risk." The strategy shifts from small proofs-of-concept to large-scale applications, aiming for measurable ROI while navigating regulatory scrutiny. As AI reshapes banking, Bank of America's bold push tests the fine line between opportunity and peril.

Hackers Use Hidden QEMU Linux VMs to Evade Windows Security and Launch Stealth Attacks

 

Cybersecurity experts have uncovered a stealthy tactic where attackers bypass Windows defenses by running concealed Linux virtual machines using QEMU. Researchers warn that these hidden environments allow threat actors to maintain persistent access, steal sensitive data, and even deploy ransomware.

Earlier findings highlighted how Russian-linked groups exploited Microsoft Hyper-V to install covert Linux virtual machines on targeted systems. However, because enterprise environments typically restrict or closely monitor Hyper-V, attackers have shifted to less scrutinized alternatives.

Security firm Sophos reports active misuse of QEMU, which enables attackers to operate a full Linux system within a Windows host. Activities carried out inside these virtual machines are largely undetectable by endpoint protection tools such as Windows Defender.

“Rather than deploying a pre-built toolkit, the attackers manually install and compile their full attack suite within the VM, including Impacket, KrbRelayx, Coercer, BloodHound.py, NetExec, Kerbrute, Metasploit, and supporting libraries for Python, Rust, Ruby, and C++,” Sophos said in a report detailing active exploitation campaigns.

Attackers frequently rely on Alpine Linux, particularly version 3.22.0, due to its minimal size and low resource consumption. This allows the malicious VM to operate with almost no visible impact on the host system.

Once their objectives are achieved, attackers can simply shut down the VM, erase its image, and disappear without leaving significant traces.

“Attackers are drawn to QEMU and more common hypervisor-based virtualization tools like Hyper-V, VirtualBox, and VMware,” Sophos researchers said.

“Malicious activity within a virtual machine (VM) is essentially invisible to endpoint security controls and leaves little forensic evidence on the host itself.”

One group leveraging this technique is linked to the PayoutsKing ransomware campaign and tracked as STAC4713. In observed cases, attackers used QEMU to establish covert reverse SSH backdoors, enabling them to deploy additional malicious payloads.

Even though a basic QEMU setup can run without administrative privileges, attackers often escalate access by launching VMs under a SYSTEM account via scheduled tasks. They disguise virtual disk files as innocuous items like “vault.db” and later shift to obscure DLL filenames such as “birsv.dll.”

Through these hidden VMs, attackers create reverse SSH tunnels to remote servers, granting full control over compromised systems. They also exploit built-in Windows applications like Paint, Notepad, and Edge to explore network shares and access files.

Another threat actor, identified as STAC3725, deployed a QEMU-based VM in February to conduct credential harvesting and system reconnaissance. This setup enabled activities such as Kerberos enumeration, Active Directory mapping, and even running FTP servers for staging malware or exfiltrating data.

“The abuse of QEMU represents a growing evasion trend where threat actors leverage legitimate virtualization software to conceal malicious actions from endpoint protection agents and audit logs,” Sophos warns.

“A hidden VM with a pre-loaded or compiled attack toolkit can enable a threat actor to have long-term access to a network, providing the ability to deploy malware, harvest credentials, and move laterally without leaving evidence on the host itself.”

To mitigate such risks, researchers advise IT teams to regularly audit systems for unexpected QEMU installations and suspicious scheduled tasks, especially those running under SYSTEM-level privileges. Indicators of compromise may include unusual SSH port forwarding (particularly port 22), outbound SSH connections from uncommon ports, and virtual disk files with atypical extensions such as .db, .dll, or .qcow2.
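
One practical check for the disguised disk images described above: QCOW2 images begin with the magic bytes `QFI\xfb` regardless of their file extension, so a file named `vault.db` that carries that header deserves scrutiny. A hedged sketch of such a scan (the function names and extension list are illustrative, not part of any vendor tool):

```python
from pathlib import Path

QCOW2_MAGIC = b"QFI\xfb"  # 0x51 0x46 0x49 0xFB, the QCOW2 header signature

def looks_like_qcow2(path: Path) -> bool:
    # Read only the first four bytes; a fuller scan would also check
    # the version field that follows the magic.
    try:
        with path.open("rb") as f:
            return f.read(4) == QCOW2_MAGIC
    except OSError:
        return False

def scan_for_disguised_images(root: Path, suffixes=(".db", ".dll")):
    # Flag files whose extension claims something innocuous but whose
    # header says "virtual disk image".
    return [p for p in root.rglob("*")
            if p.is_file() and p.suffix.lower() in suffixes and looks_like_qcow2(p)]
```

A scan like this complements, rather than replaces, auditing scheduled tasks and SSH traffic, since attackers can delete the disk image once their objectives are met.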

Security Researchers Uncover QEMU-Powered Evasion in Payouts King Ransomware


 

Several recent incidents of ransomware activity attributed to the Payouts King operation have highlighted a systematic shift toward virtualization-assisted intrusions, with attackers embedding QEMU as an execution layer within compromised systems. 

Attackers configure QEMU instances with reverse SSH backdoors, creating concealed virtual machines that operate independently of the host system, running malicious payloads and maintaining persistence outside the visibility of conventional endpoint security measures.

The investigation identified at least two parallel campaigns, one directly connected to Payouts King and the other stemming from exploitation of the CitrixBleed 2 flaw. Both leverage virtualization not only for evasion but also for staging post-exploitation activity.

Inside these isolated environments, attackers use tools such as Rclone, Chisel, and BusyBox to harvest credentials, investigate Active Directory, enumerate Kerberos, and stage data via temporary FTP servers.

This evolution reflects a broader operational trend: ransomware actors, including suspected initial access brokers, are moving from traditional encrypt-and-extort models to layered intrusion strategies that emphasize stealth, extended access, and pre-encryption intelligence gathering. The shift reduces detection windows and challenges reliance on file-based security indicators alone.

QEMU is an open-source emulator and virtualization framework that can run full operating systems as virtual machines on a host, a capability increasingly exploited for malicious purposes. Because host-based security controls have no visibility into processes executed within these isolated environments, attackers can use QEMU instances to deploy payloads, store tooling, and set up covert remote access channels over SSH without causing any disruption.

The technique has precedent: it was used in earlier operations linked to the 3AM ransomware group, the LoudMiner campaign, and the CRON#TRAP activity cluster. Sophos's analysis in recent months details its operationalization across two distinct intrusion sets, the first of which, observed since November 2025, is attributed to the Payouts King ransomware operation.

That activity overlaps with GOLD ENCOUNTER, a group known to target hypervisors and deploy encryptors within VMware ESXi environments. In this campaign, attackers create a scheduled task called TPMProfiler that launches a hidden QEMU virtual machine with SYSTEM privileges, using virtual disk images disguised as benign database and DLL files.

Through carefully configured port forwarding, the adversary maintains isolation within the virtual layer while enabling reverse SSH access into the compromised host. The environment typically runs Alpine Linux 3.22.0, preloaded with offensive tools such as AdaptixC2, Chisel, BusyBox, and Rclone to facilitate communication, reconnaissance, and data movement. A parallel campaign, identified in February as STAC3725, exploits the CitrixBleed 2 flaw (CVE-2025-5777) in NetScaler ADC and Gateway appliances to gain initial access.

Once credentials have been harvested and domain-level reconnaissance has been conducted, QEMU-based virtual environments can be employed to perform similar functions, strengthening virtualization’s role as a stealth-oriented, persistent attack substrate. 

In the STAC3725 intrusion chain, the attacker initiates post-compromise activity by delivering a staged archive (an.zip) to systems breached through NetScaler. The embedded executable (an.exe) provisioned an AppMgmt service, created a privileged local account (CtxAppVCOMService), and established a persistent remote access channel through a ScreenConnect client delivered as an MSI package.

Running with SYSTEM privileges, ScreenConnect connects to the relay infrastructure and orchestrates delivery of a secondary archive containing the concealed QEMU environment. Once unpacked with 7-Zip, qemu-system-x86_64.exe boots the Alpine Linux virtual machine from a custom qcow2 disk image.

Within this isolated layer, the adversaries deliberately build their attack framework in situ rather than relying on pre-configured toolsets, installing and compiling Impacket, KrbRelayx, Coercer, BloodHound.py, NetExec, Kerbrute, and Metasploit, along with supporting dependencies spanning Python, Rust, Ruby, and C++.

This modular approach supports a range of post-exploitation activities, including credential harvesting, Kerberos enumeration, Active Directory mapping, and data staging via lightweight FTP services. Host-level manipulation continues alongside these actions: enabling WDigest credential storage, altering Microsoft Defender exclusions, executing reconnaissance commands, and loading vulnerable kernel drivers to weaken system defenses.

Follow-on activity varies from incident to incident, suggesting a division of labor consistent with initial access broker ecosystems. Persistence mechanisms include enterprise deployment tools and peer-to-peer networking frameworks such as NetBird, along with attempts to extract browser session information and disable endpoint protection via scripting.

Together, these operations reinforce the increasing use of virtualization-supported evasion, where malicious activity is effectively dispersed into transient, attacker-controlled environments that can be hidden from traditional monitoring techniques. 

Defensive guidance emphasizes detecting anomalous QEMU deployments, unauthorized privileged scheduled tasks, irregular SSH tunneling behavior, and atypical virtual disk artifacts.

All in all, these findings indicate an increased level of operational maturity among the Payouts King ecosystem, which integrates stealth infrastructure, flexible access vectors, and virtualization-based execution into a cohesive attack model that extends far beyond conventional ransomware techniques. 

A Zscaler attribution report also confirms this trajectory, pointing to overlapping tradecraft such as spam-driven intrusion attempts, social engineering deployments via Microsoft Teams, and abuse of remote access utilities by former BlackBasta affiliates. 

The ransomware itself reflects this sophistication, featuring heavy obfuscation, anti-analysis safeguards, persistence via scheduled tasks, and active termination of security processes through low-level system calls. Its encryption scheme, AES-256 in CTR mode combined with RSA-4096 and intermittent encryption for large files, demonstrates a calculated balance between speed and impact.
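
Intermittent encryption trades completeness for speed: rather than encrypting a large file end to end, the ransomware encrypts only selected chunks, which is enough to make the file unusable while cutting I/O dramatically. A toy illustration of the chunk-selection arithmetic, useful for reasoning about the technique from a defender's perspective (the chunk size and stride are hypothetical, not Payouts King's actual parameters):

```python
def chunks_to_encrypt(file_size: int, chunk: int = 1_048_576, stride: int = 3):
    # Encrypt every `stride`-th 1 MiB chunk: offsets 0, 3 MiB, 6 MiB, ...
    # Only about 1/stride of the bytes are touched, so the pass finishes
    # roughly `stride` times faster than full-file encryption.
    offsets = range(0, file_size, chunk * stride)
    return [(off, min(chunk, file_size - off)) for off in offsets]

# For a 10 MiB file, only 4 of the 10 chunks are encrypted (40% of bytes):
plan = chunks_to_encrypt(10 * 1_048_576)
covered = sum(length for _, length in plan)
print(len(plan), covered / (10 * 1_048_576))  # 4 0.4
```

The partial coverage is also why intermittent encryption can evade naive detection heuristics that watch for sustained, whole-file write activity.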

Extortion workflows then direct victims to leak portals on the dark web. As virtualization abuse blurs traditional endpoint visibility boundaries, defenders must shift toward behavioral correlation, privilege anomaly detection, and deep examination of system-level orchestration patterns, because these campaigns reflect a broader move toward ransomware operations designed to remain persistent, precise, and effectively invisible within organizations.

Salesforce’s New “Headless 360” Lets AI Agents Run Its Platform

 


Salesforce has introduced what it describes as the most crucial architectural overhaul in its 27-year history, launching a new initiative called “Headless 360.” The update is designed to allow artificial intelligence agents to control and operate the company’s entire platform without requiring a traditional graphical interface such as a dashboard or browser.

The announcement was made during the company’s annual TDX developer conference in San Francisco, where Salesforce revealed that it is releasing more than 100 new developer tools and capabilities. These tools immediately enable AI systems to interact directly with Salesforce environments. The move reflects a deeper shift in enterprise software, where the rise of intelligent agents capable of reasoning and executing tasks is forcing companies to rethink whether conventional user interfaces are still necessary.

Salesforce’s answer to that question is direct: instead of designing software primarily for human interaction, the platform is now being rebuilt so that machines can access and operate it programmatically. According to the company, this transformation began over two years ago with a strategic decision to expose all internal capabilities rather than keeping them hidden behind user interfaces.

This shift is taking place during a period of uncertainty in the broader software industry. Concerns that advanced AI models developed by companies like OpenAI and Anthropic could disrupt traditional software business models have already impacted market performance. Industry indicators, including software-focused exchange-traded funds, have declined substantially, reflecting investor anxiety about the long-term relevance of existing SaaS platforms.

Senior leadership at Salesforce has indicated that the new architecture is based on practical challenges observed while deploying AI systems across enterprise clients. According to internal insights, building an AI agent is only the initial step. Organizations also face ongoing challenges related to development workflows, system reliability, updates, and long-term maintenance.

To address these challenges, Headless 360 is structured around three foundational pillars.

The first pillar focuses on development flexibility. Salesforce has introduced more than 60 tools based on Model Context Protocol, along with over 30 pre-configured coding capabilities. These allow external AI coding agents, including systems such as Claude Code, Cursor, Codex, and Windsurf, to gain direct, real-time access to a company’s Salesforce environment. This includes data, workflows, and underlying business logic. Developers are no longer required to use Salesforce’s own integrated development environment and can instead operate from any terminal or external setup.
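
Model Context Protocol tools are advertised to the model as JSON schemas that an agent discovers at runtime. A hedged sketch of what a Salesforce-flavored tool definition might look like (the tool name, fields, and validator below are illustrative, not Salesforce's actual MCP tools):

```python
# A hypothetical MCP-style tool definition: the agent reads this schema
# and can then call the tool with validated arguments.
soql_tool = {
    "name": "run_soql_query",          # illustrative name, not Salesforce's
    "description": "Run a read-only SOQL query against the org.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "SOQL text"},
            "limit": {"type": "integer", "minimum": 1, "maximum": 200},
        },
        "required": ["query"],
    },
}

def validate_call(tool: dict, args: dict) -> bool:
    # Minimal check standing in for full JSON Schema validation.
    schema = tool["inputSchema"]
    return all(k in args for k in schema["required"]) and \
           all(k in schema["properties"] for k in args)

print(validate_call(soql_tool, {"query": "SELECT Id FROM Account", "limit": 5}))
```

Publishing capabilities as schemas like this, rather than as screens, is what lets external coding agents such as Claude Code or Cursor operate against an org without ever opening a browser.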

In addition, Salesforce has upgraded its native development environment, Agentforce Vibes 2.0, by introducing an “open agent harness.” This system supports multiple agent frameworks, including those from OpenAI and Anthropic, and dynamically adjusts capabilities depending on which AI model is being used. The platform also supports multiple models simultaneously, including advanced systems like Claude Sonnet and GPT-5, while maintaining full awareness of the organization’s data from the start.

A notable technical enhancement is the introduction of native React support. During demonstrations, developers created a fully functional application using React instead of Salesforce’s traditional Lightning framework. The application connected to Salesforce data through GraphQL while still inheriting built-in security controls. This significantly expands front-end flexibility for developers.

The second pillar focuses on deployment. Salesforce has introduced an “experience layer” that separates how an AI agent functions from how it is presented to users. This allows developers to design an experience once and deploy it across multiple platforms, including Slack, mobile applications, Microsoft Teams, ChatGPT, Claude, Gemini, and other compatible environments. Importantly, this can be done without rewriting code for each platform. The approach represents a change from requiring users to enter Salesforce interfaces to delivering Salesforce-powered experiences directly within existing workflows.

The third pillar addresses trust, control, and scalability. Salesforce has introduced a comprehensive set of tools that manage the entire lifecycle of AI agents. These include systems for testing, evaluation, monitoring, and experimentation. A central component is “Agent Script,” a new programming language designed to combine structured, rule-based logic with the flexible reasoning capabilities of AI models. It allows organizations to define which parts of a process must follow strict rules and which parts can rely on AI-driven decision-making.

Additional tools include a Testing Center that identifies logical errors and policy violations before deployment, custom evaluation systems that define performance standards, and an A/B testing interface that allows multiple agent versions to run simultaneously under real-world conditions.

One of the key technical challenges addressed by Salesforce is the difference between probabilistic and deterministic systems. AI agents do not always produce identical results, which can create instability in enterprise environments where consistency is critical. Early adopters reported that once agents were deployed, even small modifications could lead to unpredictable outcomes, forcing teams to repeat extensive testing processes.

Agent Script was developed to solve this problem by introducing a structured framework. It defines agent behavior as a state machine, where certain steps are fixed and controlled while others allow flexible reasoning. This approach ensures both reliability and adaptability.
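
Agent Script's actual syntax was not detailed in the announcement, but the state-machine idea can be sketched in plain Python: fixed states run deterministic business rules, while a designated flexible state delegates to a pluggable reasoning function (here a stub standing in for a model call):

```python
from typing import Callable

def run_agent(order: dict, reason: Callable[[str, dict], str]) -> list[str]:
    # Deterministic states enforce strict rules; the one "flexible" state
    # delegates to the reasoning callable (an LLM in production).
    log = []
    state = "validate"
    while state != "done":
        if state == "validate":                  # fixed: strict rule
            ok = order.get("amount", 0) > 0
            log.append(f"validate:{ok}")
            state = "draft_reply" if ok else "done"
        elif state == "draft_reply":             # flexible: model decides
            log.append("reply:" + reason("draft a refund reply", order))
            state = "record"
        elif state == "record":                  # fixed: always audited
            log.append("record:audit-logged")
            state = "done"
    return log

# A stub reasoner keeps the sketch self-contained:
print(run_agent({"amount": 42}, lambda prompt, ctx: "refund approved"))
```

Pinning the validate and record steps to fixed code is what restores determinism: however the model phrases its reply, the rule checks and audit logging always execute the same way.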

Salesforce also distinguishes between two types of AI system architectures. Customer-facing agents, such as those used in sales or support, require strict control to ensure they follow predefined rules and maintain brand consistency. These operate within structured workflows. In contrast, employee-facing agents are designed to operate more freely, exploring multiple paths and refining their outputs dynamically before presenting results. Both systems operate on a unified underlying architecture, allowing organizations to manage them without maintaining separate platforms.

The company is also expanding its ecosystem. It now supports integration with a wide range of AI models, including those from Google and other providers. A new marketplace brings together thousands of applications and tools, supported by a $50 million initiative aimed at encouraging further development.

At the same time, Salesforce is taking a flexible approach to emerging technical standards such as Model Context Protocol. Rather than relying on a single method, the company is offering APIs, command-line interfaces, and protocol-based integrations simultaneously to remain adaptable as the industry evolves.

A real-world example surfaced during the announcement demonstrated how one company built an AI-powered customer service agent in just 12 days. The system now handles approximately half of customer interactions, improving efficiency while reducing operational costs.

Finally, Salesforce is also changing its business model. The company is shifting away from traditional per-user pricing toward a consumption-based approach, reflecting a future where AI agents, rather than human users, perform the majority of work within enterprise systems.

This transformation suggests a new layer in strategic operations. Instead of resisting the rise of AI, Salesforce is restructuring its platform to align with it, betting that its existing data infrastructure, enterprise integrations, and accumulated operational logic will continue to provide value even as software becomes increasingly autonomous.

Nvidia’s AI Launch Sparks Quantum Stock Surge, Minting Xanadu’s CEO a Billionaire

 

Quantum computing stocks jumped after Nvidia unveiled its Ising open-source AI model family, a move that investors interpreted as a strong validation of the sector. The result was a sharp rally in several names, with Xanadu standing out as the biggest winner and its founder Christian Weedbrook briefly joining the billionaire ranks.

Notably, Nvidia’s announcement did not introduce a new quantum computer; instead, it introduced software tools aimed at two of quantum computing’s hardest problems: calibration and error correction. Nvidia said Ising can make decoding up to 2.5 times faster and three times more accurate than pyMatching, which helped convince traders that the path to practical quantum systems may be improving faster than expected.

That enthusiasm quickly turned into extreme stock moves. Xanadu’s shares climbed from under $8 to roughly $40 in six trading sessions, while the Toronto exchange paused trading several times because of the speed of the move. Similar gains appeared across the sector, including D-Wave, IonQ, Rigetti, Infleqtion, and Quantum Computing, showing that the market was bidding up the whole group rather than just one company. 

For Xanadu, the rally created an extraordinary paper windfall. Weedbrook owns 15.6% of the company through multiple voting shares, and his stake was valued at about $1.5 billion to $1.6 billion during the surge. The story is notable because the company’s valuation moved dramatically on sentiment tied to Nvidia’s broader endorsement of quantum-related tooling, not on a fresh commercial breakthrough from Xanadu itself. 

The caveat is that quantum computing remains a high-expectation, low-certainty industry. Nvidia’s move suggests that investors increasingly view AI and quantum as complementary technologies, especially if software can help make fragile quantum hardware more usable. But the volatility also highlights the risk: when a sector is still early and speculative, a single announcement can create massive gains, even before the business fundamentals fully catch up.

Tinder And Zoom Introduce World ID Iris Scanning To Verify Humans Amid Rising AI Fake Profiles

 

Now comes eye-scan tech on Tinder and Zoom, rolling out to confirm real people behind profiles amid rising fears about AI mimics and bots. This move leans on identity checks from World ID - backed by Tools for Humanity - to tell actual humans apart from automated accounts. Verification lights up through unique iris patterns, quietly working when someone logs in. Not every user sees it yet; testing shapes how widely it spreads. Behind the scenes, privacy safeguards aim to shield biometric data tightly. Shifts like these respond to digital trust gaps widening across social apps lately. Scanning begins at the iris, that ring of color in the eye, using either an app or a round gadget made for this purpose. After confirmation comes through, a distinct digital ID lands on the person's smartphone.

This key travels with them, opening access wherever systems accept it to prove someone is human, not automated software. Rising floods of fake online personas built by artificial intelligence fuel efforts like this one. Impersonations crafted by deepfakes grow more common, pushing such verification into sharper focus. Backed by Sam Altman - also at the helm of OpenAI - the project made its debut in San Francisco. At the event, he suggested the web may soon be flooded with machine-made content more than human output. Truth online might hinge on tools able to tell actual humans apart from artificial ones. 

Such systems, according to him, are likely to grow unavoidable. Fake accounts plague both Tinder and Zoom, complicating trust on these platforms. Driven by artificial intelligence, counterfeit profiles on Tinder deploy synthetic photos alongside prewritten messages. These setups often unfold into romantic deception aimed at seizing cash or sensitive details. Reports indicate massive monetary damage worldwide due to similar frauds lately. Losses tally in the billions across nations within just a few years. 

Surprisingly, Zoom faces a distinct yet connected challenge - deepfake-driven impersonation at work. A well-documented incident saw fraudsters deploy synthetic audio and video to mimic corporate leaders, tricking staff into sending large sums. Here, World ID steps in, adding stronger verification when stakes run high. Later came iris scans, after Match Group already introduced video selfies to fight fake profiles on Tinder. Though not required, this newer check offers a tougher way to prove who you really are. People at the company say it helps users feel more certain about others’ real identities. 

What matters most is trust during interactions. Because irises differ so much between people, World ID uses them as a key part of its method. This setup aims to protect user privacy by creating an individual code instead of keeping sensitive data like home locations or full names. Even though it does not collect traditional identity markers, the technology still confirms real individuals. Growth has been steady, with expanding adoption seen on various digital services. 
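World ID's actual protocol is not detailed here, but the general idea the article describes - deriving a stable, irreversible code from a biometric trait rather than storing personal data - can be loosely illustrated with a salted-hash sketch in Python. This is only an analogy: the names and the bare HMAC construction are illustrative assumptions, and a real deployment would use far heavier cryptography (e.g. zero-knowledge proofs), not this.

```python
import hashlib
import hmac

def derive_identity_code(iris_template: bytes, service_salt: bytes) -> str:
    """Derive a stable, irreversible identifier from a biometric template.

    The raw template is never stored; only the derived code is kept.
    Salting per service prevents linking the same person across services.
    (Illustrative only -- real systems like World ID use far more
    involved cryptography than a bare HMAC.)
    """
    return hmac.new(service_salt, iris_template, hashlib.sha256).hexdigest()

template = b"example-iris-feature-vector"  # hypothetical template bytes

code_a = derive_identity_code(template, b"service-A")
code_b = derive_identity_code(template, b"service-B")

# Same person + same service -> same stable code
assert code_a == derive_identity_code(template, b"service-A")
# Same person, different services -> different, unlinkable codes
assert code_a != code_b
```

The one-way property is what lets such a system "confirm real individuals" without retaining traditional identity markers: the code proves the same person returned, but cannot be reversed into the underlying biometric.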

A large number of people - already in the millions - have gone through the sign-up process. Now shaping how we confirm who's behind a screen, artificial intelligence pushes biometrics deeper into everyday applications. Though concerns linger about data safety and user acceptance, this trend mirrors wider attempts across tech sectors to tackle rising confusion between real people and sophisticated automated fakes. Despite hesitation in some areas, systems that verify physical traits gain ground as tools for clearer online identities.

Microsoft Defender “Red Sun” Flaw Raises Questions Over Antivirus Reliability and Disclosure Practices

 

Microsoft Defender Antivirus, widely used as the default protection layer for Windows systems, is facing scrutiny after a newly disclosed vulnerability suggested it may fall short in certain scenarios. Despite its role as a frontline defense against malware, recent findings indicate that the tool might not always behave as expected—and critics say Microsoft has not shown urgency in addressing the concern.

A cybersecurity researcher operating under the name Chaotic Eclipse revealed the flaw, calling it “Red Sun.” The researcher shared that a proof-of-concept (PoC) demonstrates how attackers could potentially bypass Defender’s protections. They also warned that threat actors may already be experimenting with the vulnerability.

The issue appears to originate from how Defender processes suspicious files tagged with a “cloud” marker. Under certain circumstances, the antivirus may restore or rewrite these files back to their original locations. According to the PoC, this behavior could be manipulated to overwrite critical system files, potentially allowing privilege escalation.
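The details of the Red Sun PoC are not public here, but the class of defect it describes - a restore routine writing files back without validating where they land - is well understood. Below is a minimal, hypothetical sketch of the kind of path check a restore routine would need; the protected-root list and function names are invented for illustration, not taken from Defender.

```python
from pathlib import Path

# Hypothetical protected roots; a real AV would rely on OS-enforced
# policy, not a hard-coded list like this.
PROTECTED_ROOTS = [Path("/usr/bin"), Path("/etc"), Path("C:/Windows/System32")]

def safe_to_restore(target: str) -> bool:
    """Reject restore targets that resolve (symlinks/junctions included)
    into protected system directories -- the class of check whose
    reported absence the Red Sun PoC abuses."""
    resolved = Path(target).resolve()  # follows symlinks before checking
    return not any(
        root == resolved or root in resolved.parents
        for root in PROTECTED_ROOTS
    )

assert safe_to_restore("/home/user/Downloads/report.pdf")   # ordinary file: OK
assert not safe_to_restore("/usr/bin/critical-binary")      # system path: blocked
```

The key detail is resolving the path before checking it: a naive string comparison can be bypassed with a symlink or junction that points a benign-looking restore location into a system directory.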

"I think anti-malware products are supposed to remove malicious files not be sure they are there but that's just me," remarked Chaotic Eclipse.

Earlier in the month, the same researcher disclosed another zero-day vulnerability named BlueHammer. He claimed that Microsoft Security Response Center did not consider it a major threat, prompting him to release the PoC publicly. In a follow-up discussion on Red Sun, Chaotic Eclipse said his interactions with the MSRC team have worsened, accusing Microsoft developers of unprofessional conduct.

"It was soo bad at some point I was wondering if I was dealing with a massive corporation or someone who is just having fun seeing me suffer but it seems to be a collective decision," he said.

The researcher further alleged that Microsoft’s security division has, at times, discouraged independent vulnerability reporting rather than supporting it. He also pointed to previous cases where other researchers voiced dissatisfaction with how MSRC handled their disclosures.

Despite the controversy, Red Sun is being treated as a valid security concern within the cybersecurity community. Analysts have also flagged possible real-world exploitation attempts targeting BlueHammer, Red Sun, and another vulnerability referred to as UnDefend.

Chaotic Eclipse identified the Red Sun flaw while reviewing fixes tied to CVE-2026-33825, which was addressed in Microsoft’s latest Patch Tuesday update. Additional patches may follow as further related issues come to light, even as discussions continue around Microsoft’s response to vulnerability reports.

Meanwhile, some experts suggest users consider third-party antivirus tools instead of relying solely on Microsoft Defender, though opinions differ. The researcher himself mentioned a preference for Bitdefender Antivirus Free, describing it as a lightweight solution built on a widely adopted malware detection engine.

Fake CAPTCHA Lures Power IRSF Fraud and Crypto Theft Campaigns


 

Research by Infoblox reveals a new fraud operation that repurposes a routine web security practice for telecom billing abuse, generating unauthorized mobile activity through counterfeit CAPTCHA interfaces.

In this scheme, familiar human verification prompts are repurposed as covert triggers for International Revenue Share Fraud, effectively converting a typical browser interaction into an event that is monetized through telecom billing. 

The analysis demonstrates that users who navigate what appears to be a legitimate verification process may unknowingly authorize premium or international SMS transmissions, creating a direct revenue stream for threat actors.

IRSF has challenged telecom operators for decades, but this implementation introduces a previously undetected delivery vector that exploits user trust in widely used web validation mechanisms.

While individual charges may appear insignificant, the cumulative impacts at scale present carriers with measurable financial exposure, along with an increase in customer disputes resulting from opaque and unrecognized billing activity. 

Based on the analysis, the campaign appears to have been operating since mid-2020, reflecting a sustained and carefully developed exploitation approach. Through classic social engineering techniques and browser manipulation tactics, including back-button hijacking, the infrastructure effectively limits user navigation and reinforces the illusion of a legitimate verification process.

In addition, dozens of originating numbers were identified across multiple international jurisdictions, underscoring the geographical dispersion of the monetization layer underpinning the scheme. The staged CAPTCHA sequence is designed to silently trigger multiple outbound SMS events, routing messages to a variety of premium-rate destinations rather than a single endpoint, thereby maximizing revenue per interaction.

Associated charges often appear weeks after the event, which further obscures attribution and reduces the likelihood that users recall or dispute them at billing time. Particularly significant is the operation's integration of malicious traffic distribution systems, and its repurposing of infrastructure typically used for malware delivery and phishing redirection into high-volume SMS fraud orchestration.

This convergence lets threat actors scale a campaign efficiently while maintaining operational stealth through layers of redirection and evasion mechanisms. The findings reveal a highly orchestrated, multi-phase fraud scheme that combines behavioral manipulation with telecom monetization.

By utilizing a pool of internationally distributed numbers - many registered in regions with higher SMS termination costs, including Azerbaijan, Egypt, and Myanmar - the operation maximizes per-transaction yields.

Victims are typically funneled through a series of convincing CAPTCHA challenges intended to discreetly trigger outbound messaging events to numerous premium-rate destinations, often resulting in several SMS transmissions within the same session. This layered interaction model, strengthened by browser-level interference such as history manipulation, prevents users from leaving the website while maintaining the illusion that the application is legitimate.

In this fraud model, the threat actor leverages inter-carrier settlement mechanisms to route traffic toward high-fee endpoints under revenue-sharing arrangements. Moreover, the integration of traffic distribution systems adds a level of operational precision, allowing targeted victimization while dynamically concealing malicious infrastructure from detection systems.

Industry assessments rank artificially inflated traffic of this kind among the most financially damaging types of messaging abuse, with a significant share of telecom operators reporting both elevated traffic volumes and substantial revenue leakage from such schemes.

Within this context, individually trivial charges aggregate into a scalable and persistent revenue stream, demonstrating the ongoing viability of IRSF as a global fraud vector. Detailed investigations by Infoblox and Confiant further illustrate how abuse of Keitaro Tracker has enabled large-scale fraud ecosystems.

Keitaro was originally designed as a self-hosted ad performance tracking tool, but threat actors - often operating with illegally obtained or cracked licenses - have systematically repurposed its conditional routing capabilities as a covert traffic distribution system and cloaking tool. Through this misuse, victims are diverted from seemingly legitimate entry points, such as sponsored social media advertisements, to fraudulent investment platforms promising AI-driven trading and guaranteed high returns.

To enhance credibility and engagement, campaigns frequently employ fabricated media narratives, including spoofed news coverage, synthetic endorsements, and deepfake video content attributed to actors such as FaiKast. Over a four-month observation period, telemetry indicates more than 120 discrete campaigns were deployed in conjunction with Keitaro-linked infrastructure, generating significant DNS activity across thousands of domains.

The majority of this traffic has been attributed to cryptocurrency-related fraud, particularly wallet draining schemes disguised as promotional airdrops involving widely recognized blockchain services and assets. 

The convergence of legacy investment scam tactics with adaptive traffic orchestration and artificial intelligence-based deception techniques demonstrates how scalable infrastructure is intertwined with persuasive social engineering to ensure maximum reach and financial extraction in an evolving threat landscape.

In terms of execution, the scheme contains carefully optimized conversion funnels that maximize both engagement and monetization. A typical interaction sequence of multiple CAPTCHA stages can trigger as many as 60 outbound SMS messages to a distributed network of international phone numbers, generating charges of roughly $30 per session in aggregate.
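Reading the article's figures as up to 60 premium SMS and roughly $30 in aggregate charges per session, the economics are easy to sketch. The per-message rate is inferred from those two numbers, and the monthly session count below is a purely hypothetical scale, not a figure from the report.

```python
# Figures from the article: up to 60 premium SMS per session, ~$30/session.
sms_per_session = 60
charge_per_session = 30.00  # USD, aggregate per session

# Implied per-message rate (inferred, not stated in the report).
implied_rate = charge_per_session / sms_per_session

# Hypothetical scale: 10,000 victim sessions per month.
monthly_revenue = 10_000 * charge_per_session

assert implied_rate == 0.50          # ~$0.50 per premium SMS
assert monthly_revenue == 300_000.0  # $300k/month at that assumed scale
```

This is the sense in which "individually trivial" charges matter: a rate that a single victim may not bother disputing compounds into a substantial revenue stream once the funnel is replicated across large victim pools.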

Although modest per victim, this cost model scales well across large victim pools, especially in countries with high and mid-level termination rates across Europe and Eurasia. Campaign logic is further refined through client-side state management: cookies track progression metrics such as “successRate” and dynamically determine user pathways.
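The cookie-driven routing described above can be modeled as a small state machine. The sketch below is speculative: apart from the "successRate" metric named in the research, the cookie names, thresholds, and flow labels are invented to illustrate the general pattern of advancing, holding, or filtering users based on tracked progression.

```python
def route_user(cookies: dict) -> str:
    """Speculative model of cookie-driven funnel routing: users who keep
    completing CAPTCHA stages (high "successRate") advance toward the
    most chargeable flow; others are held in the funnel or filtered
    into parallel streams. Names/thresholds are illustrative."""
    rate = float(cookies.get("successRate", 0.0))
    stage = int(cookies.get("stage", 0))
    if rate >= 0.8 and stage >= 3:
        return "premium-sms-flow"    # most monetizable path
    if rate >= 0.5:
        return "next-captcha-stage"  # keep the funnel going
    return "redirect-elsewhere"      # filter out low-value traffic

assert route_user({"successRate": "0.9", "stage": "4"}) == "premium-sms-flow"
assert route_user({"successRate": "0.6", "stage": "1"}) == "next-captcha-stage"
assert route_user({}) == "redirect-elsewhere"
```

From a defender's perspective, the same structure explains why detection fragments: different cohorts of victims see different endpoints, so no single vantage point observes the whole funnel.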

By selectively advancing, redirecting, or filtering participants into parallel fraud streams, adaptive routing improves targeting precision while fragmenting detection efforts, since traffic is distributed among multiple controlled endpoints.

Additionally, browser manipulation techniques - specifically JavaScript-driven history tampering - ensure persistence by redirecting users back into the fraudulent flow when they attempt to exit through standard navigation controls.

As a result, users face a constrained browsing environment that prolongs interaction time and increases the likelihood of repeated chargeable events before disengagement. Overall, the operation illustrates a shift in fraud engineering, as telecom exploitation, adaptive web scripting, and traffic orchestration converge into a unified, revenue-generating system.

By embedding monetization triggers within seemingly benign user interactions, and reinforcing those triggers with persistence mechanisms such as cookie-driven logic and navigation controls, threat actors are industrializing high-volume, low-value fraud. According to Infoblox, these campaigns are not only technically sophisticated but also exploit systemic gaps in web platforms, advertising networks, and telecom billing frameworks.

As these tactics grow more sophisticated, detection alone is not enough: limiting the scalability of such abuse will require coordinated mitigation, including tighter controls across digital advertising supply chains, improved browser-level safeguards, and greater transparency around cross-border messaging charges.

Can AI Own Its Work? A Debate That Started With a Monkey Photo

 



A single photograph captured in a remote forest over a decade ago has become central to one of the most complex legal questions of the digital age: what happens when creative work is produced without direct human authorship? The answer now carries long-term consequences for artificial intelligence, creative industries, and ownership rights in the modern world.

The image in question originated in 2011, when wildlife photographer David Slater was documenting crested black macaques in Indonesia. These monkeys are not only endangered but also known for their highly expressive faces, making them attractive subjects for photography. However, Slater faced difficulty capturing close-up shots because the animals were wary of human presence.

To work around this, he positioned his camera on a tripod, enabled automatic focus, and used a flash, allowing the monkeys to approach and interact with the equipment without feeling threatened. His approach relied on curiosity rather than control. Eventually, one macaque handled the camera and pressed the shutter button while looking directly into the lens. The resulting image, widely known as the “monkey selfie,” appeared almost intentional, with the animal’s expression resembling a posed portrait.

While the photograph initially brought attention and recognition, it soon triggered an unexpected legal dispute. The core issue was deceptively simple: if a photograph is not taken by a human, can anyone claim ownership over it?

The situation escalated when the image was uploaded to Wikipedia, making it freely accessible worldwide. Slater objected to this distribution, arguing that he had lost approximately £10,000 in potential earnings because the image could now be used without payment. However, the Wikimedia Foundation refused to remove the photograph. Its reasoning was based on copyright law, which generally requires a human creator. Since the image was captured by an animal, the organisation classified it as public domain material.

This interpretation was later reinforced by the U.S. Copyright Office, which formally clarified that works produced without human authorship cannot be registered. In its guidance, the office explicitly listed a photograph taken by a monkey as an example of ineligible material, establishing a clear precedent.

The dispute took another unusual turn when People for the Ethical Treatment of Animals filed a lawsuit attempting to assign copyright ownership to the macaque itself. Although framed as a legal claim over the photograph, the case was widely interpreted as an effort to establish broader legal rights for animals. After several years of legal proceedings, a court dismissed the case, concluding that animals do not have the legal capacity to initiate lawsuits.

Legal experts later observed that, although the case focused on animal authorship, it introduced a broader conceptual challenge that would become more relevant with the rise of artificial intelligence. According to intellectual property lawyer Ryan Abbott, the debate could easily extend beyond animals to machines capable of producing creative outputs.

This possibility became reality when computer scientist Stephen Thaler attempted to secure copyright protection for an image generated by his AI system, DABUS. Thaler described the system as capable of independently producing ideas, arguing that it should be recognised as the sole creator of its output. He characterised the system as exhibiting a form of machine-based cognition, though this view is strongly disputed within the scientific community.

Despite these claims, the Copyright Office rejected the application, applying the same reasoning used in the monkey selfie case. Because the work was not created by a human, it could not qualify for copyright protection. This rejection led to a legal challenge that progressed through multiple levels of the U.S. judicial system.

When the case reached the Supreme Court of the United States, the court declined to hear it, leaving lower court rulings intact. The outcome effectively confirmed that, under current U.S. law, works generated entirely by artificial intelligence cannot be owned by anyone, including the developer of the system or the individual who prompted it.

This position has far-reaching implications for the creative economy. Copyright law exists to allow creators and organisations to control and monetise their work. Without ownership rights, it becomes difficult to build sustainable business models around fully AI-generated content. Legal scholar Stacey Dogan noted that this limitation reduces the likelihood of a future where machine-generated content completely replaces human-created media.

At the same time, the rapid expansion of generative AI tools continues to complicate the landscape. These systems function by analysing large datasets and producing outputs based on user instructions, often referred to as prompts. While they can generate text, images, and video at scale, their outputs raise questions about originality and authorship, particularly when human involvement is minimal.

Recent industry developments illustrate this uncertainty. Experimental AI-generated content has attracted large audiences online, suggesting a level of public interest, even if motivations such as novelty or criticism play a role. However, some technology companies have begun reassessing their AI content strategies, particularly where ownership and profitability remain unclear.

Expert opinion on the value of fully AI-generated content remains divided. Some specialists argue that such content lacks depth or authenticity, while others view AI as a useful tool for supporting human creativity rather than replacing it. This perspective positions AI as a collaborator rather than an independent creator.

Legal approaches also vary internationally. In the United Kingdom, copyright law allows ownership of computer-generated works by assigning authorship to the individual responsible for arranging their creation. However, this framework is currently being reconsidered as policymakers evaluate whether it remains appropriate in the context of modern AI systems.

One of the most complex unresolved issues involves hybrid creation. When humans actively guide, refine, and edit AI-generated outputs, determining ownership becomes less straightforward. A notable example involves an AI-assisted artwork that won a competition after extensive prompting and editing, raising questions about how much human contribution is required for copyright protection.

This debate is not entirely new. When photography first emerged, similar concerns were raised about whether cameras, rather than humans, were responsible for creative output. Over time, legal systems adapted by recognising the role of human intention and decision-making. Artificial intelligence now presents a more advanced version of that same challenge.

For now, the legal position in the United States remains clear: without meaningful human involvement, creative works cannot be protected by copyright. However, as AI becomes increasingly integrated into creative processes, the distinction between human and machine contribution is becoming more difficult to define.

What began as an unexpected interaction between a monkey and a camera has therefore evolved into a defining case in the global conversation about creativity, ownership, and technology. The decisions made in courts today will shape how creative work is produced, distributed, and valued in the future.



PhantomCore Exploits TrueConf Flaws to Breach Russian Networks

 

A pro-Ukrainian hacktivist group known as PhantomCore has been exploiting vulnerabilities in TrueConf video conferencing software to infiltrate Russian networks since September 2025. According to a Positive Technologies report, the attackers chained three undisclosed flaws in TrueConf Server, allowing them to bypass authentication, read sensitive files, and execute arbitrary commands remotely. Despite patches being released by TrueConf on August 27, 2025, the group independently reverse-engineered these issues, launching widespread attacks on Russian organizations without relying on public exploits. 

The vulnerabilities include BDU:2025-10114 (CVSS 7.5), an insufficient access control flaw enabling unauthenticated requests to admin endpoints like /admin/*; BDU:2025-10115 (CVSS 7.5), which permits arbitrary file reads; and the critical BDU:2025-10116 (CVSS 9.8), a command injection vulnerability for full OS command execution. This exploit chain grants attackers initial foothold on vulnerable servers, facilitating lateral movement and persistence within victim environments. 

PhantomCore's operations highlight their sophistication, as they maintain stealth for extended periods—up to 78 days in some cases—while targeting sectors like government, defense, and manufacturing. PhantomCore's tactics extend beyond TrueConf exploits, incorporating phishing with password-protected RAR archives containing PhantomRAT malware, a shift from earlier ZIP-based methods. Positive Technologies noted over 180 infections from May to July 2025 alone, peaking on June 30, with at least 49 hosts still under attacker control as of early 2026. The group's pro-Ukrainian affiliation aligns with geopolitical motives, focusing exclusively on Russian entities amid ongoing cyber-espionage waves. 

Organizations running TrueConf face heightened risks if unpatched, as attackers evolve tools to evade detection and conduct large-scale breaches. Immediate mitigations include applying the August 2025 patches, monitoring admin endpoints and command logs for anomalies, and segmenting video conferencing servers from core networks. Enhanced defenses against lateral movement, such as network micro-segmentation and behavioral analytics, are crucial to counter PhantomCore's persistence. 
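One of the mitigations above is monitoring admin endpoints for anomalies, since BDU:2025-10114 allows unauthenticated requests to /admin/* paths. A minimal log-scan sketch follows; the common-log-style line format and field positions are assumptions for illustration, not TrueConf Server's actual log format, so the pattern would need adapting to real logs.

```python
import re

# Assumed common-log-style lines: client IP, request line, status code.
# Real TrueConf Server logs will differ; adapt the pattern accordingly.
LOG_RE = re.compile(
    r'(?P<ip>\S+) .* "(?:GET|POST) (?P<path>\S+)[^"]*" (?P<status>\d{3})'
)

def flag_admin_hits(lines):
    """Flag requests to /admin/* that succeeded (2xx). On a patched
    server, unauthenticated admin requests should be rejected, so
    unexpected 2xx responses here warrant investigation."""
    hits = []
    for line in lines:
        m = LOG_RE.search(line)
        if (m and m.group("path").startswith("/admin/")
                and m.group("status").startswith("2")):
            hits.append((m.group("ip"), m.group("path")))
    return hits

sample = [
    '203.0.113.7 - - "GET /admin/config HTTP/1.1" 200',
    '198.51.100.2 - - "GET /admin/users HTTP/1.1" 401',
    '192.0.2.9 - - "GET /index.html HTTP/1.1" 200',
]
assert flag_admin_hits(sample) == [("203.0.113.7", "/admin/config")]
```

A scan like this is only a first-pass signal; correlating flagged hits with known admin source IPs and with command execution logs is what would separate routine administration from exploitation of the chain.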

This campaign underscores the dangers of unpatched collaboration tools in sensitive environments, where private zero-days can fuel nation-aligned hacktivism. Russian firms must prioritize vulnerability management and threat hunting, as PhantomCore's adaptability signals ongoing threats into 2026. By staying vigilant, defenders can disrupt such stealthy intrusions before they escalate to data exfiltration or sabotage.

ShinyHunters Targets McGraw Hill In Salesforce Data Leak Dispute Over Breach Scope

 

A breach at McGraw Hill came to light when details appeared on a leak page run by ShinyHunters, a hacking collective now seeking payment. Appearing online without warning, the listing suggested sensitive data had been taken. The firm acknowledged something went wrong only after outsiders pointed to the published claims. Instead of silence, there followed a brief statement - no elaborate explanations, just confirmation. What exactly was accessed remains partly unclear, though the criminals promise more leaks if demands go unmet. Their method? Take data first, then pressure victims publicly through exposure. 

Though the collective says it pulled around 45 million records from Salesforce setups, McGraw Hill challenges how serious the incident really was. A flaw in a cloud-based Salesforce setup - misconfigured, not hacked - led to what occurred, according to the company. Public release looms unless money changes hands by their stated date. Not a breach of core infrastructure, they clarify. Timing hinges on whether terms get fulfilled. What surfaced came via access error, not forced entry. 

Later came confirmation from the firm: only minor data sat exposed through a public page tied to Salesforce. Not part of deeper networks - systems handling daily operations stayed untouched. Customer records? Still secure. Educational material platforms? Unreached. Personal identifiers like income traces or school files showed no signs of exposure. The breach never reached those layers. A single weak link elsewhere might open doors wider than expected. Problems often start outside core networks, hidden in connected tools. 

One misstep in setup could ripple across several teams relying on Salesforce. When outside systems slip, sensitive details sometimes follow. Security gaps far from the main system still carry risk close to home. What seems distant can quickly become immediate. Even with those reassurances, ShinyHunters insists the breached records include personal details - setting their version against the firm’s own review. Contradictions like this often surface when attacks aim to extort, as hackers sometimes inflate what they took to push targets into responding. 

Now operating at a steady pace, ShinyHunters stands out within the underground scene by focusing less on locking files and more on quietly siphoning information. Instead of scrambling networks, they pressure victims using material already taken - payment demands follow exposure threats. Their name surfaced after breaches hit well-known companies, where leaked datasets served as leverage. Rather than causing immediate downtime, their power lies in what could be revealed. 

What stands out lately is how this group exploited a security gap at Anodet, an analytics company, gaining entry through leaked access tokens aimed squarely at cloud-based data systems. Alongside that incident came the public drop of massive corporate datasets - another sign their main goal remains pulling vast amounts of information from high-profile targets. Among recent breaches, the one involving McGraw Hill stands out - not because of its scale, but due to how it reveals weaknesses hidden within standard cloud setups. 

Instead of breaking through strong defenses, hackers often slip in via small errors made during setup steps handled by outside teams. What makes this case notable is less about immediate damage, more about what follows: sensitive information pulled quietly into unauthorized hands. While systems keep running without interruption, stolen data becomes the weapon - threatening public release unless demands are met. 

Over time, such tactics have shifted the focus of digital attacks away from crashes toward silent leaks. With probes still underway, one thing becomes clear: oversight of outside connections matters more now than ever. When digital intruders challenge what companies say, credibility hinges on openness. Tight rules around setup adjustments help reduce weak spots. How firms handle disclosures can shape public trust just as much as technical fixes. Clarity during crises often separates measured responses from confusion.

Hackers List 8.3 Million U.S. Crime Tip Records for $10,000, Raising Major Security Concerns

 

Hackers responsible for stealing 8.3 million crime tip records are now attempting to sell the dataset for $10,000 in cryptocurrency, escalating concerns around one of the largest breaches involving sensitive law enforcement information.

The compromised data includes confidential crime tips submitted to hundreds of Crime Stoppers programs run by law enforcement agencies across the United States. It also extends to submissions made to certain branches of the U.S. military and even educational institutions.

The sale offer, posted on a cybercrime forum, highlights the serious implications of the breach involving cloud-based intelligence firm P3 Global Intel. The leaked database reportedly contains extensive personal information about individuals identified in tips, including names, email addresses, dates of birth, phone numbers, home addresses, license plate details, Social Security numbers, and criminal histories. In some cases, it also reveals identities and details of informants, potentially putting them at risk of retaliation.

Cybersecurity experts had earlier warned that the breach could also pose national security risks, given that some of the exposed tips were submitted to federal agencies and the military.

The dataset was originally stolen late last year by a hacker group known as INTERNET YIFF MACHINE and later shared with Straight Arrow News and the nonprofit transparency group Distributed Denial of Secrets (DDoSecrets). The collection, referred to as BlueLeaks 2.0, spans records from February 1987 through November 2025.

In a statement, a member of the hacking group confirmed their involvement in listing the data for sale, expressing reluctance over the decision.

“It’s truly not something I want to do and it goes against my principles,” the hacker said. “However, it was out of necessity. Principles are for the well-fed, and I’m unfortunately not in a great place.”

The hacker also indicated that there is already interest from potential buyers, some of whom may have malicious intent.

“I assume this will likely attract customers related to fraud, extortion, or at worst, finding and targeting informants,” they said. “Again, this isn’t something I feel good about doing, but it’s necessary.”

They added that the intention is to sell the dataset to a single buyer.

Mailyn Fidler, assistant professor at the University of New Hampshire Franklin School of Law specializing in cybersecurity and cybercrime, warned that exposure of such data could lead to “severe harm and even death to police informants.”

P3 Global Intel’s parent company, Navigate360, has not responded to inquiries regarding the attempted sale. Earlier, CEO JP Guilbault stated that a third-party forensic investigation was underway to determine the extent of any breach.

“To this point, we have not confirmed that any sensitive information has been accessed or misused,” Guilbault said at the time.

The company has not issued further updates, and its services continue to operate. However, some users have taken precautionary measures. For instance, the Portland Police Bureau in Oregon recently advised the public to temporarily refrain from submitting tips through its Crime Stoppers program due to the ongoing concerns.

The Shift from Cyber Defense to Recovery-Driven Security


 

Organizations are structurally recalibrating their cybersecurity strategies as they recognize that breaches impact operations, finances, and reputation in ways that extend far beyond the moment of intrusion. 

Incidents that once stayed within the domain of IT now affect the entire organization, with containment cycles stretching into months and remediation costs reaching tens of millions of dollars for large-scale breaches. 

In response, leaders are shifting their focus from absolute prevention to sustained operational continuity, recognizing that resilience is defined not by the absence of attacks but by the ability to recover quickly and precisely. 

The shift is driving a renewed focus on integrated cyber resilience frameworks that align business continuity objectives with security controls, ensuring critical systems remain recoverable even after active compromises. This evolution has also exposed a disconnect between security enforcement and operational accessibility. 

The cybersecurity function has historically prioritized perimeter hardening and strict authentication, whereas business operations demand uninterrupted data availability with minimal friction. As the threat landscape expands, these competing priorities collide, often exposing inefficiencies: layered authentication mechanisms, while indispensable, can inadvertently delay recovery workflows and extend downtime during critical incidents.

Organizations are beginning to reconcile this divide by integrating adaptive intelligence and automation into Zero Trust architectures. Rather than treating security and recovery as opposing forces, they are designing environments where continuous verification coexists with streamlined restoration capabilities. 

Zero Trust is, at its core, a strategic model rather than a single technology: it requires rigorous, context-aware authentication that weighs multiple data points before granting access. Combined with intelligent recovery systems, this approach redefines resilience by enabling secure access without compromising recovery agility, producing high-assurance environments that can maintain operations even under persistent threat. 
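As a rough illustration of the context-aware, multi-signal evaluation the Zero Trust model describes, the sketch below scores an access request against several independent signals before granting, stepping up, or denying access. The signal names, weights, and thresholds are hypothetical, not drawn from any specific product.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    # Hypothetical signals a policy engine might evaluate per request.
    mfa_verified: bool     # did the user pass multi-factor authentication?
    device_compliant: bool # is the device managed and patched?
    geo_risk: float        # 0.0 (trusted location) .. 1.0 (high risk)
    behavior_risk: float   # anomaly score from behavioral analytics

def evaluate(request: AccessRequest) -> str:
    """Combine multiple data points into one allow / step-up / deny decision."""
    if not request.mfa_verified:
        return "deny"          # hard requirement: no access without MFA
    risk = request.geo_risk + request.behavior_risk
    if not request.device_compliant:
        risk += 0.5            # unmanaged devices raise the risk score
    if risk >= 1.0:
        return "deny"
    if risk >= 0.5:
        return "step-up"       # request additional verification first
    return "allow"

print(evaluate(AccessRequest(True, True, 0.1, 0.1)))   # low-risk request
print(evaluate(AccessRequest(True, False, 0.2, 0.1)))  # unmanaged device
```

The key design point is that no single signal grants access on its own: every request is re-scored from current context, which is what lets verification remain continuous without blocking routine work.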

As ransomware campaigns grow more sophisticated, conventional backup-centric strategies are revealing their limitations, since adversaries increasingly design attacks that extend beyond the initial system compromise. In many incidents, threat actors conduct long reconnaissance phases, mapping enterprise environments, identifying high-value assets, and, critically, locating and undermining backups before encrypting or destroying data.

Cybercrime has evolved into a coordinated, enterprise-like operation in which operational disruption is deliberately engineered to maximize leverage. When attackers compromise recovery pathways, they eliminate an organization's ability to restore from trusted states, amplifying downtime and increasing financial and regulatory risk. 

Forward-looking organizations are repositioning their security postures to reflect this reality, incorporating defensive controls into a more holistic model that includes assured recoverability. This approach integrates cyber resilience with cyber recovery: the objective is not only to withstand intrusion attempts but to maintain data integrity, availability, and rapid restoration under adversarial conditions. 

Modern cyber recovery architectures reflect these evolving threat dynamics by building resilience in from the outset, repositioning data protection from a passive safeguard to an active line of defense. Organizations are increasingly adopting hardened recovery frameworks that combine air-gapped vaulting and immutable storage, so backup data cannot be manipulated by adversaries, with advanced malware scanning that validates integrity before restoration. 
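As an illustrative complement to the malware scanning mentioned above, pre-restore integrity validation can be sketched as digest checking: record a cryptographic hash of each artifact at backup time, then refuse to trust any artifact whose hash has drifted. The manifest layout and file names here are hypothetical, not any vendor's format.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large backup artifacts fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def write_manifest(backup_dir: Path) -> None:
    """Record a digest for every artifact at backup time (hypothetical manifest)."""
    manifest = {p.name: sha256_of(p)
                for p in backup_dir.iterdir()
                if p.is_file() and p.name != "manifest.json"}
    (backup_dir / "manifest.json").write_text(json.dumps(manifest))

def verify_before_restore(backup_dir: Path) -> list[str]:
    """Return names of artifacts whose contents changed since backup;
    an empty list means the vault still matches its recorded digests."""
    manifest = json.loads((backup_dir / "manifest.json").read_text())
    return [name for name, digest in manifest.items()
            if sha256_of(backup_dir / name) != digest]
```

In a hardened setup the manifest itself would live on immutable storage, so an attacker who can rewrite backup artifacts still cannot rewrite the digests used to check them.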

Complementing this, recovery processes are tested in isolated, controlled virtual environments, and point-in-time restoration capabilities can return systems to a known, uncompromised state with minimal operational disruption. 

Separate recovery enclaves are also crucial: decoupling backup infrastructure from production networks eliminates lateral-movement pathways and limits credential-based compromise. In this architecture, security and compliance requirements are built in rather than treated as an afterthought, supported by comprehensive audit trails, data tagging, and a verifiable chain of custody. Together, these capabilities give organizations a structured, audit-ready recovery posture that maintains business continuity even under sustained cyber pressure, marking a transition away from purely reactive incident response.

Organizations are also extending their resilience frameworks beyond safeguarding backup repositories to maintaining continuous visibility into repository integrity and behavior. Threat actors increasingly employ persistence-driven techniques that alter backup configurations or introduce incremental data corruption, eroding reliable recovery points over time, often without triggering immediate alerts. 

Without granular monitoring, manipulations of this kind can go undetected until recovery is initiated, by which point recovery pathways may already be compromised. Enterprises are therefore integrating advanced telemetry, behavioral analytics, and anomaly detection into their backup ecosystems, enabling early detection of irregular access patterns, unauthorized configuration changes, and deviations in data consistency. 
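One simple form of the anomaly detection described above is flagging backup runs whose size deviates sharply from recent history, since incremental corruption or a silently altered configuration often shows up first as an abnormal size delta. The window and threshold below are illustrative, assuming one size reading per nightly run.

```python
from statistics import mean, stdev

def flag_anomalies(sizes_gb: list[float],
                   window: int = 5,
                   threshold: float = 3.0) -> list[int]:
    """Return indices of backup runs whose size is a statistical outlier
    relative to the preceding `window` runs (a simple z-score check)."""
    anomalies = []
    for i in range(window, len(sizes_gb)):
        history = sizes_gb[i - window:i]
        spread = stdev(history)
        if spread == 0:
            continue  # identical history; nothing to compare against
        z = abs(sizes_gb[i] - mean(history)) / spread
        if z > threshold:
            anomalies.append(i)
    return anomalies

# Nightly full-backup sizes in GB; the final run collapses suspiciously.
sizes = [100.2, 100.5, 99.8, 101.0, 100.4, 100.7, 100.1, 42.0]
print(flag_anomalies(sizes))  # → [7]
```

A production system would track many more signals (access patterns, configuration diffs, restore-point counts), but the principle is the same: establish a behavioral baseline for the backup estate and alert on deviation before a restore is ever needed.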

This proactive visibility lets enterprises not only respond more quickly to incidents but also prevent adversaries from silently dismantling recovery capabilities. Rapid recovery is of little value if latent threats are reintroduced into production environments. 

It is equally important to ensure that recovered data is intact and uncompromised. To that end, organizations are adding validation layers, such as isolated forensic sandboxes and automated recovery testing, that verify backup integrity well before a loss occurs. 
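Under heavy simplification, the automated recovery testing described above can be sketched as a scheduled drill: restore a snapshot into an isolated sandbox, run health checks, and record whether the backup is actually restorable. Every function here is a hypothetical stub standing in for real provisioning and restore tooling.

```python
import datetime

def restore_to_sandbox(snapshot_id: str) -> dict:
    # Stub: a real implementation would provision an isolated environment
    # and restore the named snapshot into it, returning its state.
    return {"snapshot": snapshot_id, "tables": 42, "service_responds": True}

def health_checks(sandbox: dict) -> bool:
    # Minimal checks: the restored service answers and data is present.
    return sandbox["service_responds"] and sandbox["tables"] > 0

def recovery_drill(snapshot_id: str) -> dict:
    """Run one drill and return an auditable record of the outcome."""
    sandbox = restore_to_sandbox(snapshot_id)
    return {
        "snapshot": snapshot_id,
        "restorable": health_checks(sandbox),
        "tested_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

print(recovery_drill("nightly-2025-11-30"))
```

The value of such drills is less in any single check than in the habit: restorability is proven on a schedule and logged, rather than discovered during an incident.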

This amounts to a comprehensive architectural shift in which recovery is engineered as a fundamental capability rather than a reactive measure. By embedding immutability, isolation, continuous monitoring, and trusted validation into data protection strategies from the outset, enterprises position themselves to sustain operations with minimal disruption. 

Consequently, resilience no longer rests on evading every attack but on restoring systems quickly and precisely when defenses are inevitably breached. Cybersecurity effectiveness is defined not by absolute prevention but by the assurance that controlled, reliable recovery can be achieved under adverse circumstances. 

Adversaries continue to develop techniques that bypass traditional defenses and target recovery mechanisms themselves, forcing organizations to adopt a design philosophy that expects compromise rather than treating it as an exception. 

Maintaining operational continuity requires cohesively integrating security posture, continuous monitoring, and resilient recovery architectures. To mitigate the cascading impact of cyber incidents, enterprises should align detection capabilities with verified restoration processes and embed trust throughout the recovery lifecycle. 

Ultimately, resilience rests not on eliminating risk but on the ability to absorb disruption, restore critical systems with integrity, and sustain business operations in a world where cyber incidents have become an operational certainty rather than a mere possibility.