
Crazy Ransomware Gang Abuses Net Monitor and SimpleHelp for Stealthy Network Persistence

 

Security analysts at Huntress recently observed an actor tied to the Crazy ransomware group using standard employee-surveillance and remote-assistance software to stay hidden inside company networks. Rather than deploying custom malware or flashy exploits, the intruders relied on tools IT teams already know, mimicking routine maintenance activity to avoid suspicion. Because the resulting alerts resemble normal administrative actions, detection becomes far harder: the attackers behave less like outsiders breaking in and more like insiders who belong. Huntress notes this tactic has grown more common across cybercrime operations, with ordinary-looking tool usage masking a quiet buildup toward data encryption.

Across several cases reviewed by Huntress, Net Monitor for Employees Professional appeared alongside SimpleHelp's remote-access software. Used together, the two gave attackers persistent, hands-on access to affected machines while lowering the odds of tripping detection mechanisms.

In one instance, the surveillance software was deployed through Windows Installer by running msiexec.exe, which pulled the agent straight from the vendor's official site. Once active, it provided full remote screen access, command execution, file transfer, and live observation of machine activity, delivering control comparable to administrator privileges on the compromised device.

To tighten their hold, the attackers tried enabling the built-in administrator account with "net user administrator /active:yes", then pulled down SimpleHelp using PowerShell scripts. Files were hidden under plausible names - some mimicked Visual Studio's vshost.exe pattern, others posed as OneDrive components tucked inside folders like ProgramData. Even when one remote component was detected, access persisted because multiple deployment layers remained.
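One way defenders can hunt for the masquerading described above, such as a vshost.exe copy sitting in ProgramData, is a simple name-versus-path heuristic. The sketch below is illustrative only; the expected-directory list is an assumption for demonstration, not taken from the Huntress report:

```python
# Hypothetical hunting heuristic: flag well-known executable names
# running from directories where they do not belong.
EXPECTED_DIRS = {
    "vshost.exe": ("c:\\program files\\microsoft visual studio",),
    "onedrive.exe": ("c:\\program files\\microsoft onedrive", "c:\\users"),
}

def flag_masquerades(processes):
    """processes: iterable of (name, full_path); returns suspicious entries."""
    hits = []
    for name, path in processes:
        expected = EXPECTED_DIRS.get(name.lower())
        if expected is None:
            continue  # name not on the watchlist
        if not path.lower().startswith(expected):
            hits.append((name, path))  # known name, unexpected location
    return hits

procs = [
    ("vshost.exe", "C:\\ProgramData\\vshost.exe"),  # masquerade example
    ("OneDrive.exe", "C:\\Users\\alice\\AppData\\Local\\Microsoft\\OneDrive\\OneDrive.exe"),
]
print(flag_masquerades(procs))  # flags only the ProgramData copy
```

In practice the process list would come from EDR telemetry rather than a hard-coded list, and the directory table would need per-environment tuning.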

At times the SimpleHelp executable appeared under altered names mimicking standard corporate software, changes that analysts say helped it evade immediate recognition. Huntress also observed attempts to weaken Microsoft Defender by halting and removing its services, limiting detection on infected devices. In one breach, the attackers configured alert triggers inside SimpleHelp that fired whenever a machine reached sites tied to cryptocurrency storage or trading.

These triggers watched for terms linked to wallet providers, exchange portals, blockchain lookup tools, and online payment systems. The surveillance tool also logged mentions of remote-access software such as RDP, AnyDesk, TeamViewer, UltraViewer, and VNC, possibly to spot IT staff or security teams logging into affected endpoints. Although only one confirmed intrusion ended in Crazy ransomware activation, Huntress identified shared command servers and repeated file names such as vshost.exe, similarities that point to a single actor behind both breaches.
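The alert triggers described above amount to keyword matching against a watchlist. Here is a minimal sketch of that pattern; the watchlist terms are assumptions for demonstration, not the actual list the attackers used:

```python
# Illustrative keyword-trigger matcher: categories of terms checked
# against observed text such as window titles or visited URLs.
WATCHLIST = {
    "crypto": ["wallet", "exchange", "blockchain", "binance"],
    "remote_access": ["rdp", "anydesk", "teamviewer", "ultraviewer", "vnc"],
}

def classify(text):
    """Return the sorted list of watchlist categories the text matches."""
    text = text.lower()
    return sorted(
        category
        for category, terms in WATCHLIST.items()
        if any(term in text for term in terms)
    )

print(classify("AnyDesk session to blockchain explorer"))  # → ['crypto', 'remote_access']
```

The same matching logic serves defenders in reverse: feeding browser or RMM logs through such a filter can surface the reconnaissance behavior this article describes.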

Infrastructure overlap, closely matching file artifacts, and consistent operational methods tie the incidents together, even though only one attack ended in encryption and execution timing varied. Both intrusions also traced back to stolen SSL VPN login details, showing how shaky remote entry points can open doors.

Defenders should watch for odd patterns, such as trusted remote-management software appearing without warning, a tactic attackers increasingly use to twist normal tools into stealthy weapons. Requiring extra verification steps for every remote login blunts the value of stolen passwords, and because intruders now blend in using common management programs, close monitoring of network behavior and tight limits on who can enter key systems remain essential.

AI Coding Platform Orchids Exposed to Zero-Click Hack in BBC Security Test

 


A BBC journalist has demonstrated an unresolved cybersecurity weakness in an artificial intelligence coding platform that is rapidly gaining users.

The tool, called Orchids, belongs to a new category often referred to as “vibe-coding.” These services allow individuals without programming training to create software by describing what they want in plain language. The system then writes and executes the code automatically. In recent months, platforms like this have surged in popularity and are frequently presented as examples of how AI could reshape professional work by making development faster and cheaper.

Yet the same automation that makes these tools attractive may also introduce new forms of exposure.

Orchids states that it has around one million users and says major technology companies such as Google, Uber, and Amazon use its services. It has also received strong ratings from software review groups, including App Bench. The company is headquartered in San Francisco, was founded in 2025, and publicly lists a team of fewer than ten employees. The BBC said it contacted the firm multiple times for comment but did not receive a response before publication.

The vulnerability was demonstrated by cybersecurity researcher Etizaz Mohsin, who has previously uncovered software flaws, including issues connected to surveillance tools such as Pegasus. Mohsin said he discovered the weakness in December 2025 while experimenting with AI-assisted coding. He reported attempting to alert Orchids through email, LinkedIn, and Discord over several weeks. According to the BBC, the company later replied that the warnings may have been overlooked due to a high volume of incoming messages.

To test the flaw, a BBC reporter installed the Orchids desktop application on a spare laptop and asked it to generate a simple computer game modeled on a news website. As the AI produced thousands of lines of code on screen, Mohsin exploited a security gap that allowed him to access the project remotely. He was able to view and modify the code without the journalist’s knowledge.

At one point, he inserted a short hidden instruction into the project. Soon after, a text file appeared on the reporter’s desktop stating that the system had been breached, and the device’s wallpaper changed to an image depicting an AI-themed hacker. The experiment showed that an outsider could potentially gain control of a machine running the software.

Such access could allow an attacker to install malicious programs, extract private corporate or financial information, review browsing activity, or activate cameras and microphones. Unlike many common cyberattacks, this method did not require the victim to click a link, download a file, or enter login details. Security professionals refer to this technique as a zero-click attack.

Mohsin said the rise of AI-driven coding assistants represents a shift in how software is built and managed, creating new categories of technical risk. He added that delegating broad system permissions to AI agents carries consequences that are not yet fully understood.

Although Mohsin said he has not identified the same flaw in other AI coding tools such as Claude Code, Cursor, Windsurf, or Lovable, cybersecurity academics urge caution. Kevin Curran, a professor at Ulster University, noted that software created without structured review and documentation may be more vulnerable under attack.

The discussion extends beyond coding platforms. AI agents designed to perform tasks directly on a user’s device are becoming more common. One recent example is Clawbot, also known as Moltbot or Open Claw, which can send messages or manage calendars with minimal human input and has reportedly been downloaded widely.

Karolis Arbaciauskas, head of product at NordPass, warned that granting such systems unrestricted access to personal devices can expose users to serious risks. He advised running experimental AI tools on separate machines and using temporary accounts to limit potential damage.

Russia Blocks WhatsApp, Pushes State Surveillance App

 

Russia has effectively erased WhatsApp from its internet, impacting up to 100 million users in a bold move by regulator Roskomnadzor. On Wednesday, the app was removed from the national directory, severing access without prior slowdown warnings, as reported by the Financial Times and Gizmodo. WhatsApp condemned this as an attempt to force users onto a "state-owned surveillance app," highlighting the isolation of millions from secure communication. 

This crackdown escalates Russia's long-running battle against foreign messaging services amid its push for digital sovereignty. Restrictions began in August 2025 with blocks on voice and video calls, citing WhatsApp's failure to aid fraud and terrorism probes. Courts fined the Meta-owned app repeatedly for not removing banned content or opening a local office; by December, speeds dropped 70%, but full removal came after ongoing non-compliance. Telegram faced similar cuts this week, leaving Russians scrambling.

Enter Max, VK's 2025-launched "superapp" modeled on China's WeChat, now aggressively promoted as the national alternative. Preinstalled on devices and endorsed by celebrities and educators, it offers chats, video calls, file sharing up to 4GB, payments via Russia's Faster Payment System, and government services like digital IDs and e-signatures. Unlike WhatsApp's encryption, Max mandates activity sharing with authorities and lacks apparent privacy safeguards, per The Insider. 

The Kremlin justifies the ban as protecting citizens from scams and terrorism while achieving tech independence under sanctions. Spokesman Dmitry Peskov cited Meta's refusal to follow Russian law, though WhatsApp could return via compliance talks. Critics see it as sweeping speech suppression, building on post-2022 Ukraine invasion censorship that Amnesty International labeled "unprecedented." Yet past efforts, like the failed 2018 Telegram block, have shown the limits of such bans.

Users are turning to VPNs or rivals, but Max's rise could cement state surveillance in daily life. This mirrors global trends—France pushes local apps, and Meta faces U.S. spying claims—but Russia's unencrypted alternative raises alarms for privacy. As Putin eyes indefinite rule, such controls signal deepening authoritarianism, forcing 100 million into monitored chats.

Group-IB Warns Supply Chain Attacks Are Becoming a Self-Reinforcing Cybercrime Ecosystem

 

Cybercrime outfits now reshape supply chain intrusions into sprawling, linked assaults - spinning out data leaks, stolen login details, and ransomware in relentless loops, says fresh research by Group-IB. With each trend report, the security group highlights how standalone hacks have evolved: today’s strikes follow blueprints meant to ripple through corporate systems, setting off chains of further break-ins. 

Instead of going after one company just to make money fast, hackers now aim at suppliers, support services, or common software tools - gaining trust-based entry to many users at once. Cases highlighted in recent reports - the Shai-Hulud NPM worm, the break-in at Salesloft, and the corrupted OpenClaw package - all show how problems upstream spread quickly across systems. Not limited to isolated targets, these attacks ripple outward when shared platforms get hit. 

Modern supply chain attacks unfold in linked phases, says Group-IB. One stage might begin with a tainted open-source component spreading malicious code while quietly collecting login details. Following that, attackers may launch phishing efforts - alongside misuse of OAuth tokens - to seize user identities, opening doors to cloud services and development pipelines. Breached data feeds these steps, supplying access keys, corporate connections, and the situational awareness required to move sideways across systems. Later comes ransomware, sometimes accompanied by extortion threats built on insights gathered during earlier stages of the breach. One step enables another, creating loops experts call self-sustaining networks of attack.
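As a toy illustration of the linked phases described above, the loop can be modeled as stages that each consume the artifacts the previous stage produced. Stage names and artifact labels here are schematic assumptions, not details from any specific incident:

```python
# Schematic model of a self-reinforcing supply chain attack:
# each stage only fires if the previous stage supplied its inputs.
def poisoned_package(artifacts):
    # tainted open-source component harvests credentials
    return artifacts | {"harvested_credentials"}

def identity_takeover(artifacts):
    # phishing plus OAuth abuse, enabled by harvested credentials
    if "harvested_credentials" in artifacts:
        return artifacts | {"oauth_tokens", "cloud_access"}
    return artifacts

def lateral_movement(artifacts):
    # breached cloud access yields data and downstream contacts
    if "cloud_access" in artifacts:
        return artifacts | {"internal_data", "partner_contacts"}
    return artifacts

def ransomware(artifacts):
    # extortion built on insight gathered in earlier stages
    if "internal_data" in artifacts:
        return artifacts | {"extortion_leverage"}
    return artifacts

state = set()
for stage in (poisoned_package, identity_takeover, lateral_movement, ransomware):
    state = stage(state)
print(sorted(state))  # all six artifact types accumulate
```

The point of the model is the dependency chain: remove the upstream compromise and every later stage stays dormant, which is why Group-IB argues for treating vendors as part of one's own risk surface.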

Soon, Group-IB expects artificial intelligence to push this shift further. Because of AI-powered tools, scanning for flaws in vendor networks, software workflows, or browser add-on stores happens almost instantly. These systems let hackers find gaps faster - operating at speeds humans cannot match. 

Expectations point to declining reliance on classic malware, favoring tactics centered on stolen identities. Rather than using obvious harmful software, attackers now mimic authorized personnel, slipping into everyday operational processes. Moving quietly through standard behaviors allows them to stay hidden longer, gradually reaching linked environments. Because they handle sensitive operations like human resources, customer data, enterprise planning, or outsourced IT support, certain platforms draw strong interest from threat actors. 

When a compromise occurs at that level, it opens doors not just to one company but potentially hundreds connected through shared services - multiplying consequences far beyond the initial point of failure. Cases like Salesloft and the breach tied to Oracle in March 2025 show shifts in how data intrusions unfold. Rather than seeking quick payouts, hackers often collect OAuth credentials first. Missteps in third-party connections give them room to move inward. 

Once inside client systems, fresh opportunities open up: data is copied, trust-based communication chains become cover for later operations, infected updates spread quietly through established channels, and fraud grows without drawing early attention. Fault lines in digital confidence now shape modern cyber threats, according to Dmitry Volkov, who leads Group-IB. What unfolds are not one-off breaches but ripple effects across systems, and because outside providers act like open doors, companies should treat them as part of their own risk landscape.

Rather than reacting late, organizations should model supply chain risks early, track software dependencies with continuous automated scans, and maintain insight into how information moves - without it, gaps stay hidden until exploited. With supply chain breaches turning into routine operations, protecting confidence among users, partners, and code dependencies has shifted from a backup measure to a core part of security planning.

What once seemed secondary now shapes the foundation: trust must hold firm where systems connect, because failure at one point pulls down many. Security can no longer treat these relationships as external risks; they are built-in conditions, and when components depend on each other, weakness spreads fast. The report frames the shift clearly: resilience lives not just in tools but in verified connections, and the goal is less about adding layers than about strengthening what already ties everything together.

Darktrace Flags Surge in Phishing as Identity-Based Attacks Redefine 2025 Threat Landscape

 

More than 32 million high-confidence phishing emails were identified in 2025, signaling a sharp rise in identity-focused cyberattacks, according to new findings from Darktrace.

The cybersecurity firm analyzed incidents across its global customer network, revealing a year marked by growing automation, overlapping attack techniques, and faster execution by threat actors.

Among the total phishing volume, over 8.2 million emails specifically targeted high-profile individuals and executives, representing more than a quarter of all attempts observed. Additionally, 1.6 million phishing messages were traced to newly registered domains, while 1.2 million leveraged malicious QR codes to lure victims.

The report found that 70% of phishing emails bypassed DMARC authentication checks. Spear-phishing accounted for 41% of attacks, and 38% featured new social engineering strategies. Roughly one-third of the phishing emails exceeded 1,000 characters in length, indicating increasingly sophisticated messaging tactics.
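The DMARC figure above refers to sender-authentication results that receiving mail servers record in the Authentication-Results header (RFC 8601). As a rough illustration of how a defender might surface that verdict per message, here is a minimal Python sketch using only the standard library; the sample header is fabricated for demonstration:

```python
import email
import re

def dmarc_result(raw_message):
    """Return the DMARC verdict ('pass', 'fail', ...) from the first
    Authentication-Results header that carries one, else None."""
    msg = email.message_from_string(raw_message)
    for header in msg.get_all("Authentication-Results", []):
        m = re.search(r"dmarc=(\w+)", header, re.IGNORECASE)
        if m:
            return m.group(1).lower()
    return None

raw = (
    "Authentication-Results: mx.example.com; spf=pass; dkim=pass; dmarc=pass\n"
    "From: alice@example.com\n"
    "Subject: hello\n"
    "\n"
    "body\n"
)
print(dmarc_result(raw))  # → pass
```

Real mail often carries several Authentication-Results headers added by different hops, so production code would need to trust only the header stamped by its own gateway.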

Identity Compromise Emerges as Primary Breach Method

The analysis underscores a major shift in cyber intrusion tactics: identity compromise has surpassed vulnerability exploitation as the leading initial access method. Although Common Vulnerabilities and Exposures (CVEs) rose approximately 20% year-over-year, many exploits were deployed even before vulnerabilities were publicly disclosed.

"Identity has become the attacker's skeleton key. Instead of forcing their way through a firewall, adversaries are logging in with stolen credentials, hijacked tokens and abused permissions, then moving laterally under the cover of legitimacy," commented Shane Barney, CISO at Keeper Security.

"When identity controls are fragmented or overly permissive, attackers don't need novel exploits. They just need access that looks routine."

In the Americas, nearly 70% of reported incidents involved SaaS and Microsoft 365 account takeovers. The manufacturing sector accounted for 17% of documented cases and represented 29% of ransomware incidents in the region. Overall, 47% of global security events tracked in 2025 originated from the Americas.

Regional data further illustrates varying levels of digital resilience and geopolitical pressure.

In Latin America, 44% of incidents stemmed from malware spreading after phishing or credential theft. The education sector was most affected, accounting for 18% of cases. Brazil, Mexico, and Colombia recorded the highest activity levels over the past three years. Across Europe, 58% of security incidents were linked to cloud and email compromise, while 42% were tied to network-based attacks. Africa reported a 60% year-over-year spike in ransomware incidents, with 76% of compromises categorized as network-driven.

In Asia-Pacific and Japan, 84% of organizations indicated that AI-driven threats are already affecting them. However, only 42% said they have formal governance policies in place for safe AI usage.

"Identity is no longer about perimeter-based defense. The rise in AI-based agents and the massively accelerating threat landscape has rendered that approach inadequate, and prompted a shift towards identity as the critical element to enterprise security," SailPoint CEO, Mark McClain, said.

"This report's findings demonstrate that there is now a need for real-time, intelligent, and dynamic identity security, built to govern and secure not just 'who,' or in the case of AI agents, 'what,' has access to the enterprise, but what data they can access and what they are able to do once inside."

Google Observes Threat Actors Deploying AI During Live Network Breaches


 

As artificial intelligence has become a staple of modern organizations, it has transformed how they analyze data, make automated decisions, and defend their digital perimeters, moving from experimental labs into the operational bloodstream. But as these systems are incorporated deeper into company infrastructure, the technology itself is becoming both a strategic asset and a desirable target for attackers.

Adversaries seeking leverage are now studying, imitating, and in some cases quietly manipulating the same models used to draft code, triage alerts, and streamline workflows. As Fast Company points out, this dual reality is redefining cyber risk, putting AI at the heart of both defense strategy and offensive innovation. 

Insights from Google Cloud's AI Threat Tracker indicate this shift is accelerating. The report describes a significant increase in model extraction, or "distillation," attempts, in which attackers systematically query proprietary AI systems to approximate their underlying capabilities without ever breaching a network in the traditional sense.

Google Threat Intelligence observes that state-aligned and financially motivated actors affiliated with China, Iran, North Korea, and Russia are integrating artificial intelligence tools into nearly every stage of the intrusion lifecycle. 

A growing number of these campaigns include automated reconnaissance, vulnerability mapping, and highly tailored social engineering, which can be carried out with minimal direct human intervention and are increasingly modular, scalable, and effective. 

Consistent with these findings, a newly released assessment by the Google Threat Intelligence Group indicates a more operational phase of the threat landscape has begun: adversaries no longer treat artificial intelligence as a peripheral experiment but are embedding it directly into live attack workflows.

In particular, the targeting and misuse of Gemini models is highlighted, reflecting a broader trend in which commercially available generative systems are systematically evaluated, stressed, and sometimes incorporated into malicious toolchains. 

Researchers documented instances in which active malware strains made direct API calls to Gemini at runtime. Rather than hard-coding every functional component into the malware binary, operators dynamically requested task-specific source code from the model as the intrusion progressed.

The HONESTCUE malware family, for example, issued structured prompts to obtain C# code snippets that it then executed within its attack chain. By externalizing portions of its logic, the malware reduced its static footprint and complicated detection strategies that rely on signature matching or behavioral heuristics.
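Defenders can look for this runtime-LLM pattern by flagging processes that contact generative-AI API endpoints unexpectedly. The sketch below is a hypothetical heuristic; the endpoint list and process allowlist are illustrative assumptions, not vendor guidance:

```python
# Hypothetical detection sketch: unexpected processes calling
# generative-AI API endpoints from inside the network.
LLM_ENDPOINTS = (
    "generativelanguage.googleapis.com",  # Gemini API
    "api.openai.com",
    "huggingface.co",
)
ALLOWED_PROCESSES = {"chrome.exe", "code.exe"}  # assumed-benign callers

def suspicious_llm_calls(connections):
    """connections: iterable of (process_name, destination_host)."""
    return [
        (proc, host)
        for proc, host in connections
        if any(host.endswith(ep) for ep in LLM_ENDPOINTS)
        and proc.lower() not in ALLOWED_PROCESSES
    ]

conns = [
    ("chrome.exe", "api.openai.com"),                        # expected
    ("svchost_update.exe", "generativelanguage.googleapis.com"),  # odd caller
]
print(suspicious_llm_calls(conns))  # → [('svchost_update.exe', 'generativelanguage.googleapis.com')]
```

In a real deployment the connection pairs would come from proxy or DNS telemetry, and the allowlist would be built from an inventory of sanctioned AI tooling.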

The report further describes sustained model extraction, or distillation, attacks, in which threat actors generated large volumes of carefully sequenced queries to map response patterns and approximate the model's internal decision boundaries.

The adversaries' key objective is to replicate aspects of proprietary model performance through iterative analysis, allowing them to train substitute systems without bearing the full cost and workload of developing a large-scale model themselves.

A Google representative reported that multiple campaigns marked by abnormal prompt velocity and structured probing intended to harvest Gemini's underlying capabilities have been identified and disrupted, underscoring the need for safeguards that protect not only data but the model's intelligence itself.

Parallel intelligence from CrowdStrike strengthens the assessment that AI integration is materially accelerating the tempo of modern intrusions. Its investigators found adversaries querying large language models in real time to generate single-line commands for reconnaissance, credential harvesting, and data staging on compromised hosts, effectively shifting tactical decision-making to on-demand AI systems.

The firm's metrics show that this operational acceleration drove the average eCrime "breakout time" - the interval between initial access and lateral movement toward high-value assets - down to 29 minutes in 2025, with the fastest observed transition taking just 27 seconds.

CrowdStrike documented the LAMEHUG malware using an external LLM via the Hugging Face API to generate dynamic commands for enumerating hardware profiles, processes, services, network configurations, and Active Directory domain data from minimal embedded prompts. By outsourcing reconnaissance logic to a model, operators reduced the need for pre-compiled modules and could adapt rapidly without modifying the underlying binary.

This architectural choice lets a single threat actor pivot interactively, issuing contextualized instructions that respond to the environment in real time. The technology sector remains a sustained focus, given its concentration of privileged access paths and its systemic significance across the supply chain.

In addition, CrowdStrike noted that artificial intelligence is extending across multiple phases of the intrusion lifecycle. The number of incidents involving fake CAPTCHA lures grew by 563 percent in 2025 when compared with 2024, indicating the use of generative systems in social engineering. Some moderately resourced groups, such as Punk Spider, have been observed utilizing Gemini and DeepSeek to develop scripts designed to extract credentials from backup archives, terminate defensive services, and erase forensic evidence. 

AI-assisted scripting narrows the capability gap between mid-tier criminal operators and highly trained red teams, enabling coordinated chains that combine identity abuse, backup compromise, and domain escalation within a single attack.

Separately, adversaries distributed malicious npm packages that instructed AI command-line tools to generate commands for exfiltrating authentication material and cryptoassets. Incident responders discovered more than 90 environments executing this adversary-developed AI workflow, a sign that threat actors are delegating core post-exploitation functions to intelligent agents inside enterprise networks. State-aligned groups are adopting model-driven approaches as well.

The Russia-linked collective FANCY BEAR deployed LAMEHUG against Ukrainian government entities, embedding prompts that instructed the model to copy Office and PDF documents, gather domain intelligence, and stage system data into text files for exfiltration.

Underground forums reflect this operational shift: by 2025, references to ChatGPT outnumbered any other model by a significant margin, a development attributed less to technical preference than to the platform's widespread recognition and accessibility. And although LLM-enabled malware has not yet proven more effective than traditional tools, the LAMEHUG campaign illustrates how quickly reconnaissance, targeting, and staging can be automated once a model is incorporated into an intrusion toolchain.

In the near term, AI looks set to serve as a force multiplier, reducing operating friction, compressing timelines, and reshaping expectations around attacker speed and adaptability.

Furthermore, Google announced it had worked with industry partners to dismantle infrastructure associated with a suspected China-nexus espionage actor tracked as UNC2814, an operation that underscores the convergence of cloud platforms and covert command infrastructure.

According to findings published by the Google Threat Intelligence Group and Mandiant, the group compromised approximately 53 organizations across 42 countries, with additional suspected intrusions in 20 more. The actor has reportedly maintained long-term access to international government entities and global telecommunications providers across Africa, Asia, and the Americas since at least 2017.

Investigators observed the group using API calls to legitimate software-as-a-service applications as a command-and-control strategy, deliberately intermixing malicious traffic with routine cloud communication. The operation is supported by a C-based backdoor dubbed GRIDTIDE, which abuses the Google Sheets API for covert communication.

The malware implements a polling mechanism that embeds command logic in spreadsheet cells: it retrieves attacker instructions from the sheet and returns execution status codes via cell A1. A pair of adjacent cells handles bidirectional data transfer, including command output and staged files for exfiltration, while another cell stores the compromised host's system metadata. The design enables remote tasking and data transfer while concealing C2 exchanges inside otherwise benign API activity.
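The cell-based protocol can be visualized with an in-memory stand-in for the spreadsheet. The report names cell A1 for status codes; the other cell assignments below are assumptions made purely for illustration, and a plain dict replaces the real Sheets API calls:

```python
# Conceptual re-creation of the described polling scheme. Cell A1 is
# the status channel per the report; B1/C1/D1 roles are assumptions.
sheet = {
    "A1": "",        # implant writes execution status codes here
    "B1": "whoami",  # operator places the next command here
    "C1": "",        # implant stages command output / file data here
    "D1": "",        # implant records host metadata here
}

def implant_poll(sheet, run_command, host_info):
    """One polling cycle: fetch tasking, execute, write results back."""
    command = sheet.get("B1", "")
    if not command:
        return  # no tasking this cycle; traffic still looks benign
    sheet["C1"] = run_command(command)  # bidirectional data channel
    sheet["D1"] = host_info             # compromised host metadata
    sheet["A1"] = "0"                   # status code: success
    sheet["B1"] = ""                    # clear the consumed tasking

implant_poll(sheet, run_command=lambda c: "ran:" + c, host_info="host-01")
print(sheet["A1"], sheet["C1"])  # → 0 ran:whoami
```

Seen from the network, every cycle is just an authenticated read and write against a Google API, which is exactly why this traffic blends into routine cloud activity.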

Although GRIDTIDE was identified in multiple environments, researchers were unable to definitively determine if every intrusion was based on the same payload. The initial access vectors are currently being investigated; however, UNC2814 has historically exploited vulnerable web servers and edge devices to gain access. 

Post-compromise activity included lateral movement over SSH using service accounts, extensive use of living-off-the-land binaries for reconnaissance and privilege escalation, and persistence through an embedded systemd service deployed at /etc/systemd/system/xapt.service, which launched a fresh malware instance from /usr/sbin/xapt.
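A basic hunt for this kind of systemd persistence compares installed unit files against a known-good baseline. The baseline set and path rules below are illustrative assumptions; real baselines come from golden images or configuration management:

```python
# Illustrative hunt for rogue systemd persistence like xapt.service:
# compare units under /etc/systemd/system against a known-good baseline.
BASELINE_UNITS = {"sshd.service", "cron.service", "networking.service"}

def rogue_units(unit_files):
    """unit_files: mapping of unit name -> ExecStart binary path."""
    findings = []
    for unit, exec_start in unit_files.items():
        if unit not in BASELINE_UNITS:
            findings.append((unit, exec_start, "unit not in baseline"))
        elif not exec_start.startswith(("/usr/bin/", "/usr/sbin/", "/usr/lib/")):
            findings.append((unit, exec_start, "unexpected ExecStart path"))
    return findings

units = {
    "sshd.service": "/usr/sbin/sshd",
    "xapt.service": "/usr/sbin/xapt",  # name crafted to mimic a package tool
}
print(rogue_units(units))  # → [('xapt.service', '/usr/sbin/xapt', 'unit not in baseline')]
```

Note that the malicious unit here passes the path check and is caught only by the baseline comparison, which mirrors how xapt's legitimate-looking install location was meant to deflect scrutiny.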

The campaign also included the deployment of SoftEther VPN Bridge to create outbound encrypted tunnels to external infrastructure, which has previously been associated with multiple China-linked threat clusters. 

Based on forensic analysis, GRIDTIDE appears to have been selectively deployed on endpoints containing personally identifiable information in order to obtain intelligence on specific individuals or entities. Google reported that no confirmed evidence of data exfiltration occurred during the observed activity window. 

Google's remediation measures included terminating attacker-controlled Google Cloud projects, disabling UNC2814 infrastructure, revoking access to compromised accounts, and blocking misuse of the Google Sheets API endpoints used for C2 operations.

Affected organizations received official notification, and confirmed victims were given direct incident response support. Google described the operation as one of the most extensive and strategic campaigns it has encountered in recent years.

Taken together, these disclosures suggest that as AI models, APIs, and service accounts become more integrated into enterprise workflows, they will need to be governed with the same rigor as privileged infrastructure. Security leaders should treat them as high-value assets, protected by strict access controls, anomaly detection, and continuous logging.

Effective threat hunting now means monitoring for abnormal prompt velocity, unusual API polling patterns, and model-driven command execution. Organizations should also evaluate identity hygiene, restrict outbound connectivity from sensitive workloads, and harden the edge systems that serve as attackers' initial points of entry.

Cloud-native telemetry, behavioral analytics, and zero-trust segmentation can contain adversaries who try to blend malicious traffic with legitimate SaaS communications. Defensive strategies must therefore evolve in parallel with the operationalization of AI across reconnaissance, lateral movement, and persistence, with particular focus on model security, supply chain integrity, and rapid, coordinated response.

A clear lesson has emerged: artificial intelligence is no longer peripheral to cybersecurity risk; it has become integral to both the threat model and the defense architecture designed to counter it.
