
AI Agents Are Reshaping Cyber Threats, Making Traditional Kill Chains Less Relevant

 



In September 2025, Anthropic disclosed a case that highlights a major evolution in cyber operations. A state-backed threat actor leveraged an AI-powered coding agent to conduct an automated cyber espionage campaign targeting 30 organizations globally. What stands out is the level of autonomy involved. The AI system independently handled approximately 80 to 90 percent of the tactical workload, including scanning targets, generating exploit code, and attempting lateral movement across systems at machine speed.

While this development is alarming, a more critical risk is emerging. Attackers may no longer need to progress through traditional stages of intrusion. Instead, they can compromise an AI agent already embedded within an organization’s environment. Such agents operate with pre-approved access, established permissions, and a legitimate role that allows them to move across systems as part of daily operations. This removes the need for attackers to build access step by step.


A Security Model Designed for Human Attackers

The widely used cyber kill chain framework, introduced by Lockheed Martin in 2011, was built on the assumption that attackers must gradually work their way into a system. It describes how adversaries move from an initial breach to achieving their final objective.

The model is based on a straightforward principle. Attackers must complete a sequence of steps, and defenders can interrupt them at any stage. Each step increases the likelihood of detection.

A typical attack path includes several phases. It begins with initial access, often achieved by exploiting a vulnerability. The attacker then establishes persistence while avoiding detection mechanisms. This is followed by reconnaissance to understand the system environment. Next comes lateral movement to reach valuable assets, along with privilege escalation when higher levels of access are required. The final stage involves data exfiltration while bypassing data loss prevention controls.

Each of these stages creates opportunities for detection. Endpoint security tools may identify the initial payload, network monitoring systems can detect unusual movement across systems, identity solutions may flag suspicious privilege escalation, and SIEM platforms can correlate anomalies across different environments.
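The stage-by-stage detection logic the kill chain assumes can be sketched in a few lines. The following Python sketch is illustrative only (the stage names follow the model described above; the entity names and threshold are hypothetical, not any particular SIEM's API): an entity is flagged once its alerts span several distinct kill-chain stages, since one noisy alert is weak evidence but a progression across stages is not.

```python
from collections import defaultdict
from dataclasses import dataclass

# Kill-chain stages, per the Lockheed Martin model described above.
STAGES = ["initial_access", "persistence", "reconnaissance",
          "lateral_movement", "privilege_escalation", "exfiltration"]

@dataclass
class Alert:
    entity: str   # host or account the alert is attributed to
    stage: str    # which kill-chain stage the detector maps it to

def correlate(alerts, threshold=3):
    """Flag entities whose alerts span several distinct kill-chain stages."""
    stages_seen = defaultdict(set)
    for a in alerts:
        if a.stage in STAGES:
            stages_seen[a.entity].add(a.stage)
    return {e: s for e, s in stages_seen.items() if len(s) >= threshold}

alerts = [
    Alert("host-17", "initial_access"),
    Alert("host-17", "reconnaissance"),
    Alert("host-17", "lateral_movement"),
    Alert("host-42", "reconnaissance"),
]
# host-17 trips detectors at three distinct stages and is flagged;
# host-42 has only a single-stage signal and is not.
```

The point of the sketch is exactly what the article argues next: this correlation only works when the attacker is forced to pass through the stages at all.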

Even advanced threat groups such as APT29 and LUCR-3 invest heavily in avoiding detection. They often spend weeks operating within systems, relying on legitimate tools and blending into normal traffic patterns. Despite these efforts, they still leave behind subtle indicators, including unusual login locations, irregular access behavior, and small deviations from established baselines. These traces are precisely what modern detection systems are designed to identify.

However, this model does not apply effectively to AI-driven activity.


What AI Agents Already Possess

AI agents function very differently from human users. They operate continuously, interact across multiple systems, and routinely move data between applications as part of their designed workflows. For example, an agent may pull data from Salesforce, send updates through Slack, synchronize files with Google Drive, and interact with ServiceNow systems.

Because of these responsibilities, such agents are often granted extensive permissions during deployment, sometimes including administrative-level access across multiple platforms. They also maintain detailed activity histories, which effectively act as a map of where data is stored and how it flows across systems.

If an attacker compromises such an agent, they immediately gain access to all of these capabilities. This includes visibility into the environment, access to connected systems, and permission to move data across platforms. Importantly, they also gain a legitimate operational cover, since the agent is expected to perform these actions.

As a result, the attacker bypasses every stage of the traditional kill chain. There is no need for reconnaissance, lateral movement, or privilege escalation in a detectable form, because the agent already performs these functions. In this scenario, the agent itself effectively becomes the entire attack chain.


Evidence That the Threat Has Already Materialized

This risk is not theoretical. The OpenClaw incident provides a clear example. Investigations revealed that approximately 12 percent of the skills available in its public marketplace were malicious. In addition, a critical remote code execution vulnerability enabled attackers to compromise systems with minimal effort. More than 21,000 instances of the platform were found to be publicly exposed.

Once compromised, these agents were capable of accessing integrated services such as Slack and Google Workspace. This included retrieving messages, documents, and emails, while also maintaining persistent memory across sessions.

The primary challenge for defenders is that most security tools are designed to detect abnormal behavior. When attackers operate through an AI agent’s existing workflows, their actions appear normal. The agent continues accessing the same systems, transferring similar data, and operating within expected timeframes. This creates a significant detection gap.


How Visibility Solutions Address the Problem

Defending against this type of threat begins with visibility. Organizations must identify all AI agents operating within their environments, including embedded features, third-party integrations, and unauthorized shadow AI tools.

Solutions such as Reco are designed to address this challenge. These platforms can discover all AI agents interacting within a SaaS ecosystem and map how they connect across applications.

They provide detailed visibility into which systems each agent interacts with, what permissions it holds, and what data it can access. This includes visualizing SaaS-to-SaaS connections and identifying risky integration patterns, including those formed through MCP, OAuth, or API-based connections. These integrations can create “toxic combinations,” where agents unintentionally bridge systems in ways that no single application owner would normally approve.
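The "toxic combination" idea can be made concrete with a minimal sketch. The Python below is not Reco's actual detection logic, and the agent names, apps, and risky pairs are hypothetical; it only shows the shape of the check, flagging an agent that bridges two systems with privileged rights on the destination side.

```python
# Hypothetical agent-to-app permission map; names are illustrative only.
AGENT_GRANTS = {
    "sales-assistant": {"salesforce": "read", "slack": "write"},
    "ops-copilot":     {"servicenow": "admin", "google_drive": "write",
                        "salesforce": "read"},
}

# Pairs of systems that, bridged by one identity with write/admin rights,
# would let data move in ways no single application owner intended.
TOXIC_PAIRS = [("salesforce", "google_drive"), ("servicenow", "slack")]
PRIVILEGED = {"write", "admin"}

def toxic_combinations(grants):
    """Return (agent, source, destination) triples that bridge a risky pair."""
    findings = []
    for agent, apps in grants.items():
        for src, dst in TOXIC_PAIRS:
            if src in apps and dst in apps and apps[dst] in PRIVILEGED:
                findings.append((agent, src, dst))
    return findings
```

Here `ops-copilot` can read Salesforce and write to Google Drive, so it is flagged as a potential CRM-to-file-share exfiltration path, while `sales-assistant` bridges no listed pair.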

Such tools also help identify high-risk agents by evaluating factors such as permission scope, cross-system access, and data sensitivity. Agents associated with increased risk are flagged, allowing organizations to prioritize mitigation.

In addition, these platforms support enforcing least-privilege access through identity and access governance controls. This limits the potential impact if an agent is compromised.

They also incorporate behavioral monitoring techniques, applying identity-centric analysis to AI agents in the same way as human users. This allows detection systems to distinguish between normal automated activity and suspicious deviations in real time.


What This Means for Security Teams

The traditional kill chain model is based on the assumption that attackers must gradually build access. AI agents fundamentally disrupt this assumption.

A single compromised agent can provide immediate access to systems, detailed knowledge of the environment, extensive permissions, and a legitimate channel for moving data. All of this can occur without triggering traditional indicators of compromise.

Security teams that focus only on detecting human attacker behavior risk overlooking this emerging threat. Attackers operating through AI agents can remain hidden within normal operational activity.

As AI adoption continues to expand, it is increasingly likely that such agents will become targets. In this context, visibility becomes critical. The ability to monitor AI agents and understand their behavior can determine whether a threat is identified early or only discovered during incident response.

Solutions like Reco aim to provide this visibility across SaaS environments, enabling organizations to detect and manage risks associated with AI-driven systems more effectively.

Mazda Reports Limited Data Exposure After Warehouse System Breach

 

Mazda Motor Corporation has reported a data leak after suspicious activity was uncovered in its systems in December 2025. The intrusion exposed information belonging to employees and external business partners. The investigation traced the breach to a vulnerability in warehouse management software that supports component sourcing operations in Thailand, which allowed unauthorized outside parties to gain access.

Despite early concerns, investigators confirmed that the breach was limited to internal systems and that no customer data was involved. A subsequent count showed that 692 records may have been viewed by unauthorized parties. The exposed fields included login credentials, full names, work email addresses, job titles, and identifiers tied to partner networks; nothing directly linked to customers was affected.

After discovering the issue, Mazda notified Japan’s privacy regulator and launched an investigation with outside cybersecurity experts. So far, there is no evidence that the leaked details have been exploited. Still, affected individuals are being urged to watch for suspicious messages and potential fraud tied to the breach, since exposed personal information can be misused long after an incident.

Mazda responded quickly with a series of security upgrades. Access controls were tightened, internet-exposed services were reduced to limit entry points, patches were applied to close the known gaps, and monitoring was tuned to catch anomalous behavior faster. The changes share a clear goal: preventing a recurrence through layered defenses rather than any single fix.

Mazda noted that the breach showed no signs of ransomware or other malware, and that operations remain unaffected. Although certain hacking groups have previously claimed to have attacked Mazda’s networks, the company clarified that this incident has no connection to those claims and that no communication from any threat actor has occurred.

The incident underscores how much attention supply-chain and operational security now demand. Mazda says it will continue to monitor its environment and adjust its digital safeguards as new risks emerge.

LeakNet Ransomware Uses ClickFix and Deno for Stealthy Attacks

 

LeakNet ransomware has changed its approach by pairing ClickFix social-engineering lures with a Deno-based loader, making its intrusion chain harder to spot. The group is using compromised websites to trick users into running malicious commands, then executing payloads in memory to reduce obvious traces on disk. 

Security researchers say this is a notable shift because ClickFix replaces older access methods like stolen credentials with a user-triggered infection path. Once the victim interacts with the fake prompt, PowerShell and VBS scripts can launch the next stage, often with misleading file names that look routine rather than malicious. 

The Deno runtime is the second major piece of the campaign. Deno is a legitimate JavaScript and TypeScript runtime, but LeakNet is abusing it in a “bring your own runtime” style so it can run Base64-encoded code directly in memory, fingerprint the host, contact command-and-control servers, and repeatedly fetch additional code. 

That design helps the attackers stay stealthy because it minimizes the amount of malware written to disk and can blend in with normal software activity better than a custom loader might. Researchers also note that LeakNet is building a repeatable post-exploitation flow that can include lateral movement, payload staging, and eventually ransomware deployment. 
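One of the defensive checks discussed below, watching for Deno outside developer workflows, can be approximated with a simple command-line heuristic. The sketch is illustrative only: real EDR rules draw on richer telemetry, and the thresholds and patterns here are assumptions, not published indicators. It flags `deno` invocations that run remote scripts or evaluate long inline blobs, the "bring your own runtime" patterns described above.

```python
import re
import shlex

# Deno subcommands that execute attacker-supplied code.
SUSPICIOUS_SUBCOMMANDS = {"eval", "run"}

def looks_like_byor_loader(cmdline: str) -> bool:
    """Heuristic: does this process command line resemble a Deno-based loader?"""
    try:
        argv = shlex.split(cmdline)
    except ValueError:
        return True  # unparseable command lines are themselves suspicious
    if not argv or "deno" not in argv[0].lower():
        return False
    if len(argv) < 2 or argv[1] not in SUSPICIOUS_SUBCOMMANDS:
        return False
    rest = " ".join(argv[2:])
    # Remote script URLs are a fetch-and-run pattern.
    if re.search(r"https?://", rest):
        return True
    # Long Base64-looking tokens passed inline suggest encoded payloads.
    for token in argv[2:]:
        if len(token) > 80 and re.fullmatch(r"[A-Za-z0-9+/=]+", token):
            return True
    return False
```

A rule like this would still pass legitimate developer usage such as `deno test ./app_test.ts`, which is why the recommendation is to scope monitoring to hosts where Deno has no business running at all.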

For organizations, the primary threat is that traditional file-based detection may miss the earliest stages of the attack. A campaign that starts with a convincing browser prompt or a fake verification page can quickly turn into an internal breach if users are not trained to question unexpected instructions. 

Safety recommendations 

To mitigate the threat, companies should train users to avoid following browser-based “fix” prompts, especially on unfamiliar or compromised sites. They should also restrict PowerShell, VBS, and other script interpreters where possible, monitor for Deno running outside developer workflows, watch for unusual PsExec or DLL sideloading activity, and segment networks so one compromised host cannot easily spread access. Finally, maintain tested offline backups and keep a playbook for rapid isolation, because fast containment is often the difference between a blocked intrusion and a full ransomware incident.

24.5 Million Dollar Hack Exposes Vulnerabilities in Resolv DeFi


 

The concept of stability is fundamental to the architecture of decentralized finance - it is the foundation upon which trust is built. A stablecoin brings parity with the dollar to the decentralized finance system, providing a quiet assurance that one token will reliably mirror one unit of currency. 

That proposition has been severely undercut in the case of Resolv, where the USR token now trades at less than a third of its intended peg, hovering around 27 cents, a structural breakdown that simple recalibration cannot repair. 

During the early hours of Sunday morning, at approximately 2:21 a.m. UTC, an attacker exploited a vulnerability within the protocol's minting contract, fabricating nearly 80 million tokens without backing. A swift and systematic unwinding of value followed: the artificially created assets were funneled through decentralized exchanges, swapped for more liquid stablecoins, and eventually consolidated into Ether. 

By the end of the operation, the attacker had obtained digital assets worth approximately $25 million, leaving behind not only a depegged token but also a stark reminder of how quickly confidence erodes when the mathematical foundations of a financial system fail to hold. The mechanics of the breach point to a deeper architectural weakness rather than a momentary lapse. 

The sequence began with a modest capital injection of $100,000 to $200,000 in USDC, enough to engage the protocol's minting interface under normal conditions. What happened next diverged sharply from the expected flow: by exploiting a flaw in the authorization logic, the adversary generated approximately 80 million USR tokens, vastly more than the collateral provided. 

Ultimately, the breakdown traced back to an off-chain signing service entrusted with a privileged private key that authorized mint quantities. The contract verified that a valid cryptographic signature was present but imposed no intrinsic ceiling on issuance, so a critical control was externalized rather than enforced on the blockchain. 
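A toy model makes the missing control concrete. The Python sketch below is not Resolv's contract (which is Solidity and far more complex); all names and numbers are illustrative. It only shows the difference between trusting a signer alone and additionally enforcing a collateral-derived ceiling inside the contract itself.

```python
class TokenVault:
    """Toy model of a collateral-backed mint with an on-chain issuance ceiling.

    The flaw described above was that the contract trusted an off-chain
    signer and enforced no ceiling of its own. Here the cap is checked
    inside the 'contract' regardless of what the signer approved.
    """
    def __init__(self, collateral_ratio=1.0):
        self.collateral = 0.0              # e.g. USDC deposited
        self.supply = 0.0                  # tokens issued
        self.collateral_ratio = collateral_ratio

    def deposit(self, amount):
        self.collateral += amount

    def mint(self, amount, signature_valid):
        # 1. The signature check alone is the exploited design.
        if not signature_valid:
            raise PermissionError("invalid mint signature")
        # 2. Deterministic on-chain safeguard: issuance can never exceed
        #    what the deposited collateral supports.
        if self.supply + amount > self.collateral * self.collateral_ratio:
            raise ValueError("mint would exceed collateral-backed ceiling")
        self.supply += amount

vault = TokenVault()
vault.deposit(200_000)                      # a modest USDC stake
vault.mint(200_000, signature_valid=True)   # fully backed: allowed
# An 80-million-token mint request would now fail the ceiling check
# even when presented with a perfectly valid signature.
```

The second check is what "enforced on the blockchain" means in practice: a bound the signer cannot override, no matter what its private key authorizes.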

Having created the unbacked tokens, the attacker moved with calculated precision to convert USR into its staked derivative, wstUSR, and unwind the position through decentralized liquidity pools. The assets were exchanged incrementally for stablecoins and then consolidated into Ether, letting the proceeds be absorbed into deeper market liquidity. 

The sudden injection of uncollateralized supply destabilized USR's market equilibrium, driving a rapid depreciation of almost 80 percent. With the sequence of events established, attention turns to the minting architecture and the implicit trust assumptions that made the breach possible.

The repercussions of the exploit have not been confined to Resolv's immediate ecosystem; they have rippled across interconnected DeFi infrastructure. Organizations that integrated USR into shared liquidity pools, accepted it as collateral, or relied on its yield mechanisms have begun detailed internal assessments to determine their exposure. 

Decentralized finance is built on the premise of composability, with protocols layered on one another to enhance efficiency, and this chain reaction illustrates the corresponding downside. The sudden depegging of USR has left upstream platforms with balance sheet inconsistencies. 

As a precaution, select operations were suspended, withdrawals and deposits were restricted, and governance-driven responses were initiated to cover potential deficits. Reconciling the impact of a compromised asset requires a detailed audit of smart contract states and liquidity positions, not just surface-level accounting.

The episode reinforces a persistent structural reality in DeFi: vulnerabilities at a foundational layer can destabilize the entire stack, disrupting even indirectly exposed participants. Attention has since shifted to the post-exploit environment, where the trajectory of the stolen assets may influence recovery prospects. 

On-chain observations indicate that most of the approximately $25 million extracted remains consolidated in wallets controlled by the attacker, with no visible signs of obfuscation through mixers or cross-chain transfers. Historically, such inactivity has preceded negotiation attempts, as in prior incidents where attackers engaged protocol teams under whitehat or quasi-whitehat frameworks to return funds in exchange for incentives. 

It is also unclear whether Resolv's operators have initiated similar outreach or structured a formal bounty; no confirmation of direct communication with the attacker has been released to date. Blockchain analytics firms are actively tracing the transaction flows, but no parallel involvement by law enforcement agencies has been reported. 

In the near term, the focus is on transparency and remediation: affected users and counterpart protocols are monitoring official disclosures, evaluating exposure statements, and waiting for comprehensive post-incident analyses along with compensation frameworks. 

Decentralized finance continues to gain momentum toward broader adoption; however, the incident once again illustrates the significant gap between innovation and security assurance in systems where trust is distributed but accountability can become muddled.

In the aftermath of the incident, the focus has shifted from attribution to prevention, underlining the need for more resilient design principles across decentralized systems. Security in DeFi cannot be delegated, even partially, to off-chain mechanisms or implicit trust models; critical controls must be enforced at the protocol level through deterministic safeguards, capped minting logic, and continuous validation of state changes. 

For protocol architects and developers, the incident is a reminder of the importance of minimizing privileged dependencies, implementing rigorous audit layers, and stress testing composability risks under adversarial conditions. 

For users, it is imperative to evaluate not only yield opportunities but also the structural integrity of the underlying mechanisms. Sustained credibility will depend less on the speed at which innovations ship and more on the discipline with which security assumptions are developed, verified, and transparently communicated.

“Unhackable” No More: Researcher Demonstrates Hardware-Level Exploit on Xbox One







For years, the Xbox One was widely viewed as one of the few gaming systems that had resisted successful hacking. That perception has now changed after a new hardware-based attack method was publicly demonstrated.

At the RE//verse 2026 event, security researcher Markus Gaasedelen introduced a technique called the “Bliss” double glitch. This method relies on manipulating electrical voltage at precise moments to interfere with the console’s startup process, effectively bypassing its built-in protections.

This marks the first known instance where the Xbox One’s hardware defenses have been broken in a way that others can replicate. The achievement is being compared to the Reset Glitch Hack that affected the Xbox 360, although this newer approach operates at a deeper level. Instead of targeting software vulnerabilities, it directly interferes with the boot ROM, a core component embedded in the console’s chip. By doing so, the exploit grants complete control over the system, including its most secure layers such as the hypervisor.

When the Xbox One was introduced in 2013, Microsoft designed it with an unusually strong security model. The system relied on multiple layers of encryption and authentication, linking firmware, the operating system, and game files into a tightly controlled verification chain. Within the company, it was even described as one of the most secure products Microsoft had ever built.

A substantial part of this design was its secure boot process. Unlike the Xbox 360, which was compromised through reset-line manipulation, the Xbox One removed such external entry points. It also incorporated a dedicated ARM-based security processor responsible for verifying every stage of the startup sequence. Without valid cryptographic signatures, no code was allowed to run. For many years, this approach appeared highly effective.

Rather than attacking these higher-level protections, the researcher focused on the physical behavior of the hardware itself. Traditional glitching techniques rely on disrupting timing signals, but the Xbox One’s architecture left little opportunity for that. Instead, the method used here involves voltage glitching, where the power supplied to the processor is briefly disrupted.

These momentary drops in voltage can cause the processor to behave unpredictably, such as skipping instructions or misreading operations. However, the timing must be extremely precise, as even a tiny variation can result in failure or system crashes.

To achieve this level of accuracy, specialized hardware tools were developed to monitor and control electrical signals within the system. This allowed the researcher to closely observe how the console behaves at the silicon level and identify the exact points where interference would be effective.
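Although the researcher's actual tooling is hardware-specific, voltage-glitching campaigns generally follow a parameter search of this shape: sweep glitch offset and width, repeat each point because faults are probabilistic, and record where the target misbehaves. The Python simulation below is purely illustrative, with made-up timing numbers and probabilities standing in for real silicon behavior; it shows the search pattern, not the Xbox One exploit itself.

```python
import random

def run_glitch(offset_ns, width_ns, rng):
    """Simulated target response to one voltage glitch (illustrative only).

    A real campaign replaces this with a hardware trigger. Here, a narrow
    window of offsets and widths 'succeeds' probabilistically; everything
    else either crashes the target or has no effect.
    """
    if 1_000 <= offset_ns <= 1_010 and 20 <= width_ns <= 40:
        return "success" if rng.random() < 0.5 else "crash"
    return "crash" if width_ns > 60 else "no_effect"

def sweep(rng):
    """Sweep offset/width, retrying each point since faults are probabilistic."""
    hits = []
    for offset in range(900, 1_100, 5):
        for width in (10, 30, 50, 70):
            for _ in range(10):
                if run_glitch(offset, width, rng) == "success":
                    hits.append((offset, width))
                    break
    return hits

hits = sweep(random.Random(0))
# The recorded hits cluster inside the narrow vulnerable window,
# mirroring how real campaigns home in on exact timing before
# attempting a full exploit chain.
```

Chaining two such calibrated glitches back to back, as the "Bliss" technique does, compounds the precision problem, which is why purpose-built monitoring hardware was needed.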

The resulting “Bliss” technique uses two carefully timed voltage disruptions during the startup process. The first interferes with memory protection mechanisms managed by the ARM Cortex subsystem. The second targets a memory-copy operation that occurs while the system is loading initial data. If both steps are executed correctly, the system is redirected to run code chosen by the attacker, effectively taking control of the boot process.

Unlike many modern exploits, this method does not depend on software flaws that can be corrected through updates. Instead, it targets the boot ROM, which is permanently embedded in the chip during manufacturing. Because this code cannot be modified, the vulnerability cannot be patched. As a result, the exploit allows unauthorized code execution across all system layers, including protected components.

With this level of access, it becomes possible to run alternative operating systems, extract encrypted firmware, and analyze internal system data. This has implications for both security research and digital preservation, as it enables deeper understanding of the console’s architecture and may support efforts to emulate its environment in the future.

Beyond research applications, the findings may also lead to practical tools. There is speculation that the technique could be adapted into hardware modifications similar to modchips, which automate the precise electrical conditions needed for the exploit. Such developments could revive longstanding debates around console modification and software control.

From a security perspective, the immediate impact on Microsoft may be limited, as the Xbox One is no longer the company’s latest platform, and newer systems have adopted updated security designs based on similar principles. However, the discovery serves as a lesson for the industry: no system can be considered permanently secure, especially when attacks target the underlying hardware itself.

AI-Driven Phishing Campaign Exploits Device Permissions to Steal Biometric and Personal Data

 

A fresh wave of AI-driven phishing is changing how attackers obtain personal information: instead of stealing passwords, they target deeper device permissions. Spotted by analysts at Cyble Research & Intelligence Labs (CRIL) in early 2026, the operation relies on psychological manipulation rather than brute force, using carefully crafted messages to trick users into granting access that is normally protected. 

Where earlier scams relied on fake login pages, this one adapts in real time, mimicking legitimate requests so closely that they blend into routine tasks. Behind each message is software trained to mirror human timing and phrasing, and because it evolves with user responses, static defenses struggle to catch it. Access grows step by step, first a small permission, then another, until full control emerges without any alarms sounding. What sets the campaign apart is not raw power but patience: an attacker that waits, learns, and moves only when ready, staying hidden far longer than expected. 

Unlike typical scams that use fake sign-in screens, this operation uses misleading prompts, such as account confirmations or service warnings, to coax users into granting camera, microphone, and system access. Once authorized, malicious code quietly collects photos, video clips, audio files, device specifications, contact lists, and location data. Everything is transmitted in real time to attacker-controlled Telegram bots, enabling fast exfiltration without complex backend infrastructure. 

Inside the campaign’s code, signs of AI involvement emerge: annotations organized with machine-like neatness and deliberate emoji sequences scattered through script comments. These markers suggest generative models were used repeatedly, making the phishing infrastructure faster and more systematic to build, at a scale larger than manual effort alone would allow. Most of the operation runs counterfeit websites through hosting services including EdgeOne, making it cheap to launch many fraudulent pages quickly. 

The counterfeit sites mimic well-known apps such as TikTok, Instagram, Telegram, and even Google Chrome to appear familiar and safe. The method abuses standard browser permission interfaces: when someone engages with a malicious page, scripts automatically trigger access requests. If granted, the code activates the webcam and captures frames as image files, while audio and video are logged simultaneously and transmitted directly to the attackers. Fingerprinting then builds a detailed profile covering operating system, browser specifics, memory size, CPU benchmarks, network behavior, battery level, IP address, and physical location. 

Occasionally the operation also attempts to pull contact details, including names, numbers, and email addresses, through browser interfaces, widening exposure to the victim’s connected circles. Fake login screens display progress cues such as “photo captured” or “identity confirmed” to appear legitimate. When collection ends, the code shuts down quietly and restores the screen, leaving few traces behind. 

Security specialists warn that combining personal traits with behavioral patterns gives intruders the tools to mimic identities convincingly, making manipulation precise and nearly invisible. As AI tools grow more accessible, such layered intrusions are becoming increasingly common.
