SMS and OTP Bombing Tools Evolve into Scalable, Global Abuse Infrastructure

 

The modern authentication ecosystem operates on a fragile premise: that one-time password requests are legitimate. That assumption is increasingly being challenged. What started in the early 2020s as loosely circulated scripts designed to annoy phone numbers has transformed into a coordinated ecosystem of SMS and OTP bombing tools built for scale, automation, and persistence.

New findings from Cyble Research and Intelligence Labs (CRIL) analyzed nearly 20 actively maintained repositories and found rapid technical progression continuing through late 2025 and into 2026. These tools have moved beyond basic terminal scripts. They now include cross-platform desktop applications, Telegram-integrated automation frameworks, and high-performance systems capable of launching large-scale SMS, OTP, and voice-bombing campaigns across multiple geographies.

Researchers emphasize that the study reflects patterns within a defined research sample and should be viewed as indicative of broader trends rather than a full mapping of the global ecosystem. Even within that limited dataset, the scale and sophistication are significant.

SMS and OTP bombing campaigns exploit legitimate authentication endpoints. Attackers repeatedly trigger password resets, registration verifications, or login challenges, overwhelming a victim’s phone with genuine SMS messages or automated voice calls. The result ranges from harassment and disruption to more serious risks such as MFA fatigue.

Across the 20 repositories examined, researchers identified approximately 843 vulnerable API endpoints. These endpoints belonged to organizations across telecommunications, financial services, e-commerce, ride-hailing services, and government platforms. The recurring weaknesses were predictable: inadequate rate limiting, weak or poorly enforced CAPTCHA mechanisms, or both.

Regional targeting was uneven. Roughly 61.68% of observed endpoints—about 520—were linked to infrastructure in Iran. India accounted for 16.96%, approximately 143 endpoints. Additional activity was concentrated in Turkey, Ukraine, and parts of Eastern Europe and South Asia.

The attack lifecycle typically begins with endpoint discovery. Threat actors manually test authentication workflows, probe common API paths such as /api/send-otp or /auth/send-code, reverse-engineer mobile applications to uncover hardcoded API references, or leverage community-maintained endpoint lists shared in public repositories and forums. Once identified, these endpoints are integrated into multi-threaded attack frameworks capable of issuing simultaneous requests at scale.

The technical sophistication of SMS and OTP bombing tools has advanced considerably. Maintainers now offer versions across seven programming languages and frameworks, lowering entry barriers for individuals with limited coding expertise.

Modern toolkits commonly include:
  • Multi-threading to enable parallel API exploitation
  • Proxy rotation to bypass IP-based defenses
  • Request randomization to mimic human behavior
  • Automated retry mechanisms and failure handling
  • Real-time activity dashboards

More concerning is the widespread use of SSL bypass techniques. Approximately 75% of the repositories analyzed disable SSL certificate validation. Instead of relying on properly verified secure connections, these tools deliberately ignore certificate errors, enabling traffic interception or manipulation without interruption. SSL bypass has emerged as one of the most frequently observed evasion strategies.

In addition, 58.3% of repositories randomize User-Agent headers to evade signature-based detection systems. Around 33% exploit static or hardcoded reCAPTCHA tokens, effectively bypassing poorly implemented bot protections.

The ecosystem has also expanded beyond SMS flooding. Voice-bombing capabilities—automated call floods triggered through telephony APIs—are now integrated into several frameworks, broadening the harassment surface.

Commercialization and Data Harvesting Risks

Alongside open-source development, a commercial layer has surfaced. Browser-based SMS and OTP bombing platforms now offer simplified, point-and-click interfaces. Often marketed misleadingly as “prank tools” or “SMS testing services,” these platforms eliminate technical setup requirements.

Unlike repository-based tools that require local execution and configuration, web-based services abstract proxy management, API integration, and automation processes. This significantly increases accessibility.

However, these services frequently operate on a dual-threat model. Phone numbers entered into such platforms are often harvested. The collected data may later be reused in spam campaigns, sold as lead lists, or integrated into broader fraud operations. In effect, users risk exposing both their targets and themselves to ongoing exploitation.

Financial, Operational, and Reputational Impact

For individuals, SMS and OTP bombing can severely disrupt device usability. Effects include degraded performance, overwhelmed message inboxes, exhausted SMS storage, battery drain, and increased risk of MFA fatigue—potentially leading to accidental approval of malicious login attempts. Voice-bombing campaigns further intensify the disruption.

For organizations, the consequences extend well beyond inconvenience.

Financially, each OTP message typically costs between $0.05 and $0.20. An attack generating 10,000 messages can result in expenses ranging from $500 to $2,000. Sustained abuse of exposed endpoints can drive monthly SMS costs into five-figure sums.
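The arithmetic behind these figures is straightforward. As a minimal sketch, using the per-message price range quoted above as illustrative assumptions (not billing data from any specific provider):

```python
# Estimating the direct SMS cost of an OTP-bombing incident, using the
# per-message price range cited above ($0.05-$0.20) as an assumption.

def otp_abuse_cost(messages: int, low: float = 0.05, high: float = 0.20) -> tuple[float, float]:
    """Return the (minimum, maximum) estimated SMS cost in dollars."""
    return messages * low, messages * high

lo, hi = otp_abuse_cost(10_000)
print(f"10,000 messages: ${lo:,.0f} to ${hi:,.0f}")  # $500 to $2,000

# Sustained abuse compounds quickly: one such burst per day for a month
# lands squarely in the five-figure range described above.
monthly_lo, monthly_hi = otp_abuse_cost(10_000 * 30)
print(f"Monthly: ${monthly_lo:,.0f} to ${monthly_hi:,.0f}")
```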

Operationally, legitimate users may be unable to receive verification codes, customer support volumes can surge, and authentication delays can impact service reliability. In regulated industries, failure to secure authentication workflows may introduce compliance risks.

Reputational damage compounds these issues. Users quickly associate spam-like behavior with weak security controls, eroding trust and confidence in affected organizations.

As SMS and OTP bombing tools continue to evolve in sophistication and accessibility, the strain on authentication infrastructure underscores the urgent need for stronger rate limiting, adaptive bot detection, and hardened API protections across industries.
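As an illustration of the first of those mitigations, a minimal in-memory sliding-window rate limiter for an OTP endpoint might look like the following. The class name, thresholds, and window size are assumptions chosen for the example, not a production design; a real deployment would back this with a shared store such as Redis and combine it with CAPTCHA and bot-detection checks.

```python
import time
from collections import defaultdict, deque

# Sketch of per-recipient rate limiting for an OTP endpoint: at most
# `max_requests` sends per phone number within a sliding time window.
class OtpRateLimiter:
    def __init__(self, max_requests: int = 3, window_seconds: int = 300):
        self.max_requests = max_requests
        self.window = window_seconds
        self._log: dict[str, deque] = defaultdict(deque)

    def allow(self, phone_number: str, now: float = None) -> bool:
        now = time.monotonic() if now is None else now
        log = self._log[phone_number]
        # Drop timestamps that fell outside the sliding window.
        while log and now - log[0] > self.window:
            log.popleft()
        if len(log) >= self.max_requests:
            return False  # limit exceeded: reject without sending an SMS
        log.append(now)
        return True

limiter = OtpRateLimiter()
results = [limiter.allow("+15551234567", now=t) for t in (0, 1, 2, 3)]
print(results)  # [True, True, True, False]
```

The sliding window (rather than a fixed-interval counter) prevents a burst straddling an interval boundary from doubling the effective limit, which is exactly the behavior bombing tools probe for.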

Malicious AI Chrome Extensions Steal Users' Emails and Passwords


Thirty malicious Chrome extensions, installed by more than 300,000 users, are posing as AI assistants to steal credentials, browsing information, and email content. Several of the extensions are still active in the Chrome Web Store and have been downloaded by tens of thousands of users.

Experts at browser security platform LayerX found the malicious extension campaign and labelled it AiFrame. They discovered that all studied extensions are part of the same malicious attack as they interact with infrastructure under a single domain, tapnetic[.]pro. 

Experts said that the most popular extension in the AiFrame operation, named Gemini AI Sidebar (fppbiomdkfbhgjjdmojlogeceejinadg), had 80,000 users, though it is no longer available in the Chrome Web Store.

According to BleepingComputer, other extensions with over a thousand users each are still active in Google's repository for Chrome extensions. The names differ, but the underlying behavior is the same.

LayerX discovered that all 30 extensions share the same JavaScript logic, permissions, internal structure, and backend infrastructure.

The malicious browser add-ons do not implement their AI functionality locally.

This is risky because the publishers can modify the extensions' logic without shipping an update, similar to how Microsoft Office Add-ins work. This lets them evade a fresh review by the Chrome Web Store.

Besides this, the extensions extract page content from the sites users visit, including verification pages, using Mozilla's Readability library.

According to LayerX, a group of 15 extensions exclusively targets Gmail data by injecting UI components with a content script that executes at "document_start" on "mail.google.com." The script reads visible email content straight from the DOM and repeatedly retrieves email thread text using ".textContent"; even email drafts can be recorded, the researchers said. "The extracted email content is passed into the extension's logic and transmitted to third-party backend infrastructure controlled by the extension operator when Gmail-related features like AI-assisted replies or summaries are invoked," LayerX said in a report released today.

Additionally, the extensions include a mechanism for remotely triggering speech recognition and transcript creation that uses the Web Speech API to deliver the results to operators. Depending on the permissions granted, the extensions may even capture conversations from the victim's surroundings. Google had not responded to BleepingComputer's request for comment on LayerX's findings by the time of publication. For the full list of malicious extensions, consult LayerX's indicators of compromise; users should reset the passwords for all accounts if an intrusion is confirmed.

Researchers Identify Previously Undocumented Malware Used in World Leaks Intrusions

 



Cybersecurity researchers have identified a newly developed malicious software tool being used by the extortion-focused cybercrime group World Leaks, marking a notable development in the group's technical capabilities. According to findings published by the cybersecurity research division of Accenture, the malware has not been observed in prior investigations and appears to be custom-built for covert operations within victim networks. The researchers have designated the tool "RustyRocket" to distinguish it from previously documented malware families.

Analysts explain that RustyRocket functions as a long-term persistence mechanism. Instead of triggering immediate disruption, the malware is designed to quietly embed itself within compromised systems, allowing attackers to remain present for extended periods without raising alarms. This hidden presence enables threat actors to move through internal networks, quietly extract sensitive information, and route network traffic through compromised machines. Security experts involved in the research noted that the tool had operated unnoticed until its recent discovery, underscoring the challenges organizations face in detecting advanced covert threats.

Although World Leaks is commonly categorized as a ransomware group, its operations differ from traditional ransomware campaigns that encrypt files and demand payment for decryption keys. Rather than denying access to data, the group prioritizes unauthorized data collection. Victims are pressured with the threat of having confidential corporate and personal information publicly disclosed if payment demands are not met. This model places reputational damage, regulatory penalties, and legal exposure at the center of the extortion strategy.

The group has publicly claimed responsibility for attacks against large international corporations. In one widely reported incident, World Leaks alleged that a major global sportswear company declined to comply with extortion demands, after which a substantial volume of internal documents was released. As with many threat actor statements, independent verification of the full scope of such claims remains limited, underlining the importance of cautious attribution in cyber incident reporting.

From a technical perspective, RustyRocket is written in the Rust programming language and engineered to operate across both Microsoft Windows and Linux environments. This cross-platform design allows the malware to function in mixed enterprise infrastructures, increasing its usefulness to attackers. Researchers describe the tool as a combined data extraction and network proxy utility, capable of transferring stolen information through multiple layers of encrypted communication. By masking malicious traffic within normal network activity, the malware makes detection by conventional security tools comparatively more difficult.

The tool also incorporates an execution safeguard that requires attackers to supply a pre-encrypted configuration file at runtime. Without this configuration, the malware remains dormant. This feature complicates forensic analysis and reduces the likelihood that automated security systems will successfully analyze or neutralize the tool.

Investigators assess that World Leaks has been active since early 2025 and typically gains initial access through social engineering techniques, misuse of compromised credentials, or exploitation of externally exposed systems. Once inside a network, tools like RustyRocket enable attackers to quietly maintain their presence while systematically collecting data for later extortion.

Security specialists warn that RustyRocket reflects a broader turn in cybercriminal operations toward stealth-based, intelligence-gathering intrusions rather than overtly disruptive attacks. To reduce exposure, organizations are advised to closely monitor unusual outbound data transfers and enforce strict network segmentation. These measures can limit an attacker’s ability to move across systems and reduce the volume of data that can be silently extracted.

The rise of RustyRocket illustrates how extortion groups are increasingly investing in custom malware designed to evade traditional defenses, reinforcing the need for continuous security testing, proactive threat monitoring, and workforce preparedness to counter evolving attack methods.


UK May Enforce Partial Ransomware Payment Ban as Cyber Reforms Advance

Governments around the world are testing various approaches to reducing cybercrime, but outlawing ransomware payouts remains especially controversial. A move toward limiting such payments is gaining traction in the United Kingdom, according to Jen Ellis, an expert deeply involved in shaping national responses to ransomware threats.

A ban on ransom payments might come soon in Britain, according to Ellis, who co-chairs the Ransomware Task Force at the Institute for Security and Technology. While she expects this step, she warns against seeing it as a cure-all. In her view, curbing victim payouts does little to reduce how often hackers strike, since offenders operate beyond the reach of such rules. Still, paying ransoms carries moral weight: those funds flow into networks built on digital crime, and even if the practical impact is narrow, letting money change hands rewards illegal behavior.

Ellis anticipates that UK authorities will strengthen the country's overall cybersecurity posture before touching payment rules. An updated Cyber Action Plan has recently been published, reshaping the goals for how the country prepares for and responds to digital threats and signaling a fresh push to overhaul national defenses online.

A key piece of legislation now moving forward is the Cyber Security and Resilience Bill, which has just reached its second parliamentary reading. If enacted, it would impose stricter rules on disclosing breaches and make monitoring of weak points in supplier networks compulsory for many businesses outside government. These steps promise clearer insight into digital threats and fewer large-scale risks tied to external vendors. Though details remain under review, accountability shifts noticeably toward proactive defense.

Once these efforts advance, Ellis says, officials might consider limiting ransomware payments. Though it is unclear when or how broadly such limits would take effect, she anticipates they would not apply uniformly. It remains undecided whether constraints would affect only major entities, focus on particular sectors, or permit exceptions based on set conditions. Whether organizations allowed to make payments must first gain authorization, especially to comply with sanctions rules, is also unsettled.

In a recent conversation with Information Security Media Group, Ellis discussed shifts in how ransomware groups operate. Not every group follows the same pattern: some now avoid extreme disruption, while outfits like Scattered Spider still stand out for acting boldly and unpredictably. Payment restrictions also came up, since they could reshape what both attackers and targeted organizations expect from these incidents.

Working alongside security chiefs and tech firms, Ellis leads NextJenSecurity to deepen insight into digital threats. Her involvement extends beyond the private sector: she advises UK government bodies such as the Cabinet Office's cyber panel, and institutions ranging from the Royal United Services Institute to the CVE Program include her in key roles. Engagement with policy experts and advocacy groups forms part of her broader effort to reshape how online risks are understood.

Tesla Slashes Car Line-Up to Double Down on Robots and AI

 

Tesla is cutting several car models and scaling back its electric vehicle ambitions as it shifts focus towards robotics and artificial intelligence, marking a major strategic turning point for the company. The move comes after Tesla reported its first annual revenue decline since becoming a major EV player, alongside a steep fall in profits that undercut its long-standing image as a hyper-growth automaker. Executives are now presenting AI-driven products, including autonomous driving systems and humanoid robots, as the company's next big profit engines, even as demand for its vehicles shows signs of cooling in key markets.

According to the company, several underperforming or lower-margin models will be discontinued or phased out, allowing Tesla to concentrate resources on a smaller range of vehicles and on the software and AI platforms that power them. This rationalisation follows intense price competition in the global EV market, especially from Chinese manufacturers, which has squeezed margins and forced Tesla into repeated price cuts over the past year. While the company argues that a leaner line-up will improve efficiency and profitability, the decision raises questions about whether Tesla is stepping back from its once-stated goal of driving a mass-market EV revolution.

Elon Musk has increasingly projected Tesla as an AI and robotics firm rather than a traditional carmaker, highlighting projects such as its Optimus humanoid robot and advanced driver-assistance systems. In recent briefings, Musk and other executives have suggested that robotaxis and factory robots could ultimately generate more value than car sales, if Tesla can achieve reliable full self-driving and scale its robotics platforms. Investors, however, remain divided on whether these long-term bets justify the current volatility in Tesla’s core automotive business.

Analysts say the shift underscores broader turbulence in the EV sector, where slowing demand growth, higher borrowing costs and intensifying competition have forced companies to reassess expansion plans. Tesla’s retrenchment on vehicle models is being closely watched by rivals and regulators, as it may signal a maturing market in which software, AI capabilities and integrated ecosystems matter more than the sheer number of models on offer. At the same time, a pivot towards AI raises fresh scrutiny over safety, data practices and the real-world performance of autonomous systems.

For consumers, the immediate impact is likely to be fewer choices in Tesla's showroom but potentially faster updates and improvements to the remaining models and their software features. Some owners may welcome the renewed focus on autonomy and smart features, while others could be frustrated if favoured variants are discontinued. As Tesla repositions itself, the company faces a delicate balancing act: reassuring car buyers and shareholders today while betting heavily that its AI and robotics vision will define its future tomorrow.

Largest Ever 31.4 Tbps DDoS Attack Attributed to Aisuru Botnet


 

In November 2025, the public internet experienced an unprecedented surge of traffic lasting thirty-five seconds. The ramp-up was immediate and absolute, peaking at 31.4 terabits per second before dissipating nearly as quickly as it formed. Attributed to the AISURU botnet, also known as Kimwolf, the event demonstrated how distributed infrastructure can be used to achieve extreme bandwidth saturation over a short period of time.

Cloudflare has released findings indicating that the incident was the largest distributed denial of service attack disclosed to date as well as contributing to an overall rise in hyper volumetric HTTP DDoS activity observed during the year 2025. In contrast to being an isolated outlier, the November spike is associated with a sustained upward trend in both the scale and operational speed of large-scale DDoS campaigns. 

Throughout the year, Cloudflare's telemetry indicated significant increases in attack frequency and intensity, culminating in a sharp rise in hypervolumetric incidents during the fourth quarter. Observed attack sizes have grown by more than 700 percent since late 2024, reflecting a significant change in the bandwidth resources and orchestration techniques available to contemporary botnet operators. The 31.4 Tbps burst was attributed to AISURU/Kimwolf infrastructure, which researchers have linked to multiple coordinated campaigns in 2025.

Automated traffic analysis and inline filtering systems helped detect and mitigate the November event, underscoring how important such systems have become in combating high-speed volumetric floods. The same botnet was also involved in an operation that began on December 19, which has been referred to as The Night Before Christmas.

During that campaign, attack volumes at one stage measured approximately 3 billion packets per second, 4 Tbps of throughput, and 54 million HTTP requests per second, while peak rates reached 9 billion packets per second, 24 Tbps, and 205 million requests per second, showing simultaneous exploitation of application- and network-layer vectors. These year-end metrics help contextualize the operational environment in which these campaigns unfolded.

According to Cloudflare, DDoS activity increased by 121 percent during 2025, with defensive systems mitigating an average of 5,376 attacks per hour. Aggregated attacks exceeded 47.1 million, more than doubling the previous year's total. An estimated 34.4 million network-layer attacks took place over the year, up from 11.4 million in 2024.

These attacks accounted for 78 percent of all DDoS activity. In the last quarter, DDoS incidents increased 31 percent quarter over quarter and 58 percent year over year, suggesting sustained expansion rather than episodic surges.

A distinctive component of that growth curve was hypervolumetric attacks. In the fourth quarter alone, 1,824 such incidents were recorded, compared with 1,304 in the previous quarter and 717 in the first quarter. Attack volumes thus increased severalfold within a single annual cycle: not only has the frequency of attacks grown, but their amplitude has risen notably as well.

Combined, the data indicates that the threat landscape has been enhanced by compressed attack windows, increased packet rates, and unprecedented throughput levels, which reinforces concerns that record-breaking DDoS capacity is becoming an iterative benchmark rather than an exceptional event.

The December campaign, known as The Night Before Christmas, was a calculated extension of the same operational doctrine. From December 19, 2025, the botnet directed sustained hypervolumetric traffic at Cloudflare's infrastructure and downstream customers, blending record-scale Layer 4 floods with HTTP surges exceeding 200 million requests per second at the application layer.

This operation exceeded the botnet's own previous benchmark of 29.7 Tbps, set in September 2025, marking a significant increase in bandwidth deployment and request generation. Upon examining the campaign, investigators determined that millions of unofficial streaming boxes had been conscripted into it, generating packets and requests at rates rarely seen.

At its apex of 31.4 Tbps, the attack reached a magnitude exceeding several major providers' publicly disclosed mitigation ceilings. In purely theoretical terms, Akamai Prolexic's stated capacity of 20 Tbps, Netscout Arbor Cloud's 15 Tbps, and Imperva's 13 Tbps would have been driven to bandwidth utilization levels of roughly 150 to 240 percent under equivalent load.
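Those utilization figures can be reproduced with a quick calculation. The capacities below are the publicly stated numbers quoted above, and the result is a purely theoretical what-if, not a claim about real-world performance:

```python
# Theoretical utilization: the 31.4 Tbps peak measured against each
# provider's publicly stated mitigation capacity (as quoted in the text).
PEAK_TBPS = 31.4
stated_capacity_tbps = {
    "Akamai Prolexic": 20,
    "Netscout Arbor Cloud": 15,
    "Imperva": 13,
}

for provider, capacity in stated_capacity_tbps.items():
    utilization = 100 * PEAK_TBPS / capacity
    print(f"{provider}: {utilization:.0f}% of stated capacity")
# Akamai Prolexic: 157%, Netscout Arbor Cloud: 209%, Imperva: 242%
```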

The comparison is purely theoretical, but it highlights the structural stress such volumes impose on conventional scrubbing architectures, and the gap between centralized scrubbing and the distributed absorption and traffic-engineering strategies that deliver real-world resilience. Rather than a single monolithic flood, telemetry from this campaign revealed a pattern of distributed, highly coordinated bursts.

Thousands of discrete attack waves exhibited consistent scaling characteristics. Ninety-three percent of events peaked between one and five Tbps, while 5.5 percent peaked between five and ten Tbps. Only a fractional 0.1 percent of events exceeded 30 Tbps, demonstrating that the headline-making spike was not only rare but, statistically, deliberate.

According to packet rate analysis, 94.5 percent of attacks generated packets between one and five billion per second, while 4 percent peaked at five to ten billion, and 1.5 percent reached ten to fifteen billion packets per second. A number of attack waves were engineered as concentrated bursts rather than prolonged sieges, highlighting the tactical refinement of the operation. 

Some 9.7 percent of attacks lasted less than 30 seconds, 27.1 percent lasted between 30 and 60 seconds, and 57.2 percent lasted 60 to 120 seconds. Only 6 percent exceeded the two-minute mark, suggesting a focus on high-intensity volleys designed to strain defensive thresholds before adaptive mitigation can fully adjust.

Among hypervolumetric incidents, 42.5 percent targeted gaming organizations, while 15.3 percent targeted IT and services organizations. This distribution indicates a focus on industries with high latency sensitivity and heavy infrastructure dependence, where even brief disruptions can have a substantial operational and financial impact.

In the wake of the December offensive, the botnet has come to be seen as one of the most significant distributed denial of service threats observed in recent years. Built on the compromise of consumer-grade devices, the Aisuru operation, which split off an Android-focused Kimwolf variant in August 2025, expanded aggressively.

According to Synthient, Kimwolf infected more than two million unofficial Android TV devices, turning them into a global attack grid. The operators built layered command-and-control architectures using residential proxy networks to obscure origin infrastructure and make takedowns harder.

The botnet captured public attention after it briefly pushed its own domain to the top of Cloudflare's global rankings, an outcome achieved through artificial traffic amplification rather than organic traffic. Disruption efforts are ongoing: Black Lotus Labs, a division of Lumen Technologies, began counter-operations in early October 2025, disrupting traffic to more than 550 command-and-control servers connected to Kimwolf and Aisuru.

The network displayed adaptive resilience, however: its endpoints rapidly migrated to newly provisioned hosts, frequently using IP address space associated with Resi Rack LLC and recurring autonomous system numbers to reconstitute its control plane. This infrastructure rotation illustrates a trend in botnet engineering that treats redundancy and rapid redeployment as part of operational design rather than as a contingency measure.

Accelerating DDoS activity was evident across the entire internet as the record-setting events unfolded. There were 47.1 million DDoS incidents in 2025, a 121 percent increase over 2024 and a 236 percent increase over 2023. Over the year, automated mitigation systems processed approximately 5,376 attacks per hour, comprising roughly 3,925 network-level events and 1,451 HTTP-layer floods.

Most of the expansion occurred at the network layer, with network-layer attacks roughly tripling from 11.4 million incidents to 34.4 million year over year. In the fourth quarter alone, 8.5 million such attacks took place, reflecting 152 percent year-over-year growth and a 43 percent quarter-over-quarter increase, with network-layer vectors accounting for 78 percent of all DDoS activity in that quarter.

Indicators of scale and sophistication reveal an intensifying threat model. Network-layer attacks exceeding 100 million packets per second increased by 600 percent over the previous quarter, while those surpassing 1 Tbps increased by 65 percent. Nearly 1 percent of network-layer attacks exceeded the 1 million packet per second threshold, emphasizing the increasing use of high-intensity traffic bursts designed to stress routing and filtering systems.

Known botnets caused most HTTP DDoS activity, accounting for 71.5 percent; anomalous HTTP attributes accounted for 18.8 percent, fake or headless browser signatures for 5.8 percent, and generic flood techniques for 1.8 percent. Duration analysis indicated that 78.9 percent of HTTP floods ended within ten minutes, suggesting a tactical preference for high-impact, compressed attack cycles.

Roughly three out of every hundred HTTP events qualified as hypervolumetric at the application layer, while 69.4 percent of HTTP events remained below 50,000 requests per second and 2.8 percent exceeded 1 million requests per second. More than half of HTTP DDoS attempts were neutralized automatically, without human intervention, by Cloudflare's real-time botnet detection systems, reflecting an increased reliance on machine-learning-driven mitigation frameworks.

DDoS traffic observed in the fourth quarter exhibited notable changes in source distribution. Bangladesh emerged as the largest origin, replacing Indonesia, which fell to third place. Ecuador ranked second, while Argentina rose twenty places to become the fourth-largest source. Hong Kong, Ukraine, Vietnam, Taiwan, Singapore, and Peru also contributed significantly.

Autonomous system data indicates that adversaries disproportionately exploit cloud computing platforms and telecommunications infrastructure. In this quarter's rankings, Russia fell five positions and the United States fell four.

Six cloud providers appeared among the top ten source networks, including DigitalOcean, Microsoft, Tencent, Oracle, and Hetzner, reflecting the misuse of rapidly deployable virtual machines to generate traffic. The remaining high-volume sources were mainly telecommunications carriers in Asia Pacific, primarily in Vietnam, China, Malaysia, and Taiwan.

Despite the extraordinary magnitude of the Night Before Christmas campaign, Cloudflare's globally distributed architecture contained the load within operational limits. The 31.4 Tbps spike consumed approximately 7 percent of available bandwidth across 330 points of presence, leaving considerable residual capacity.
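Taking the reported 7 percent figure at face value, the implied aggregate capacity can be estimated with a back-of-envelope calculation. The derived numbers below are estimates inferred from that figure, not disclosed capacity data:

```python
# If 31.4 Tbps consumed roughly 7% of available bandwidth, the implied
# aggregate capacity across the 330 points of presence follows directly.
peak_tbps = 31.4
utilization = 0.07  # approximate share of capacity consumed (from the report)

implied_total_tbps = peak_tbps / utilization
per_pop_tbps = implied_total_tbps / 330

print(f"Implied aggregate capacity: ~{implied_total_tbps:.0f} Tbps")
print(f"Average per point of presence: ~{per_pop_tbps:.2f} Tbps")
```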

The attack was detected and contained autonomously, without triggering any emergency escalation protocols. The episode highlights the gap between the capabilities of adversarial traffic generators and the defensive capacity of smaller providers.

With volumetric ceilings rising and botnets adopting increasingly modular command frameworks, the sustainability of internet-facing services will depend on hyperscale mitigation infrastructure able to absorb not only record-setting spikes but also a steadily accelerating baseline of global DDoS activity. These events trace a trajectory with clear implications for enterprises, service providers, and infrastructure operators.

In a world where volumetric thresholds continue to grow and botnets industrialize device compromise at scale, incremental upgrades and reactive controls cannot be relied upon to maintain a defensive edge. Mitigation partners should be evaluated on demonstrated absorption capacity, architectural distribution, maturity of automated response, and transparency of telemetry.

Edge assets, IoT ecosystems, and cloud workloads must also be hardened, as they are increasingly exploited both as targets and as unwitting launch platforms.

The November and December campaigns are not merely record-setting anomalies; they signal a structural shift in adversarial capability. Resilience in this environment is defined less by preventing every attack and more by engineering networks that can sustain, absorb, and recover from traffic volumes once considered unimaginable.

State-Backed Hackers Are Turning to AI Tools to Plan, Build, and Scale Cyber Attacks

 

Cybersecurity investigators at Google have confirmed that state-sponsored hacking groups are actively relying on generative artificial intelligence to improve how they research targets, prepare cyber campaigns, and develop malicious tools. According to the company’s threat intelligence teams, North Korea–linked attackers were observed using the firm’s AI platform, Gemini, to collect and summarize publicly available information about organizations and employees they intended to target. This type of intelligence gathering allows attackers to better understand who works at sensitive companies, what technical roles exist, and how to approach victims in a convincing way.

Investigators explained that the attackers searched for details about leading cybersecurity and defense companies, along with information about specific job positions and salary ranges. These insights help threat actors craft more realistic fake identities and messages, often impersonating recruiters or professionals to gain the trust of their targets. Security experts warned that this activity closely resembles legitimate professional research, which makes it harder for defenders to distinguish normal online behavior from hostile preparation.

The hacking group involved, tracked as UNC2970, is linked to North Korea and overlaps with a network widely known as Lazarus Group. This group has previously run a long-term operation in which attackers pretended to offer job opportunities to professionals in aerospace, defense, and energy companies, only to deliver malware instead. Researchers say this group continues to focus heavily on defense-related targets and regularly impersonates corporate recruiters to begin contact with victims.

The misuse of AI is not limited to one actor. Multiple hacking groups connected to China and Iran were also found using AI tools to support different phases of their operations. Some groups used AI to gather targeted intelligence, including collecting email addresses and account details. Others relied on AI to analyze software weaknesses, prepare technical testing plans, interpret documentation from open-source tools, and debug exploit code. Certain actors used AI to build scanning tools and malicious web shells, while others created fake online identities to manipulate individuals into interacting with them. In several cases, attackers claimed to be security researchers or competition participants in order to bypass safety restrictions built into AI systems.

Researchers also identified malware that directly communicates with AI services to generate harmful code during an attack. One such tool, HONESTCUE, requests programming instructions from AI platforms and receives source code that is used to build additional malicious components on the victim’s system. Instead of storing files on disk, this malware compiles and runs code directly in memory using legitimate system tools, making detection and forensic analysis more difficult. Separately, investigators uncovered phishing kits designed to look like cryptocurrency exchanges. These fake platforms were built using automated website creation tools from Lovable AI and were used to trick victims into handing over login credentials. Parts of this activity were linked to a financially motivated group known as UNC5356.

Security teams also reported an increase in so-called ClickFix campaigns. In these schemes, attackers use public sharing features on AI platforms to publish convincing step-by-step guides that appear to fix common computer problems. In reality, these instructions lead users to install malware that steals personal and financial data. This trend was first flagged in late 2025 by Huntress.

Another growing threat involves model extraction attacks. In these cases, adversaries repeatedly query proprietary AI systems in order to observe how they respond and then train their own models to imitate the same behavior. In one large campaign, attackers sent more than 100,000 prompts to replicate how an AI model reasons across many tasks in different languages. Researchers at Praetorian demonstrated that a functional replica could be built using a relatively small number of queries and limited training time. Experts warned that keeping AI model parameters secret is not enough, because every response an AI system provides can be used as training data for attackers.
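The query-then-imitate loop behind model extraction can be shown with a deliberately toy sketch. The "teacher" below is a stand-in black box, not any real AI system; actual attacks query large language models and train neural imitators, but the shape is the same: harvest input/output pairs, then fit a replica.

```python
import random

# Toy illustration of model extraction: treat the "proprietary model" as a
# black box, harvest query/response pairs, then fit a local replica.

def teacher(x):
    # Black-box model: the attacker only ever sees its outputs.
    return 2.0 * x + 1.0

# 1. Query phase: collect input/output pairs.
random.seed(0)  # deterministic sampling for reproducibility
queries = [random.uniform(-10, 10) for _ in range(1000)]
answers = [teacher(x) for x in queries]

# 2. Training phase: fit y = a*x + b by ordinary least squares.
n = len(queries)
mx = sum(queries) / n
my = sum(answers) / n
a = sum((x - mx) * (y - my) for x, y in zip(queries, answers)) / sum(
    (x - mx) ** 2 for x in queries
)
b = my - a * mx

def replica(x):
    # The extracted model: behaves like the teacher without its internals.
    return a * x + b

print(abs(replica(42.0) - teacher(42.0)))  # close to 0
```

This is why keeping model parameters secret is insufficient: every response is usable training signal, and the defender never sees anything other than ordinary queries.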

Google, which launched its AI Cyber Defense Initiative in 2024, stated that artificial intelligence is increasingly amplifying the capabilities of cybercriminals by improving their efficiency and speed. Company representatives cautioned that as attackers integrate AI into routine operations, the volume and sophistication of attacks will continue to rise. Security specialists argue that defenders must adopt similar AI-powered tools to automate threat detection, accelerate response times, and operate at the same machine-level speed as modern attacks.


Panera Bread Reportedly Hit by ShinyHunters Data Breach, 14 Million Records Exposed

 

Panera Bread has allegedly fallen victim to a cyberattack carried out by the notorious hacking collective ShinyHunters, with millions of customer records said to have been stolen.

The threat group recently listed Panera Bread, along with CarMax and Edmunds, on its data leak portal. In Panera’s case, attackers claim to have accessed approximately 14 million records. The compromised data reportedly includes customer names, email addresses, mailing addresses, phone numbers, and account-related details. Altogether, around 760MB of compressed data was allegedly extracted from company systems.

In a conversation with The Register, ShinyHunters stated that access to Panera’s network was gained through Microsoft Entra single sign-on (SSO). If accurate, the breach may be connected to a recent alert issued by Okta, which warned that cybercriminals were targeting SSO credentials from Okta, Microsoft, and Google through an advanced voice phishing scheme.

Should that link be confirmed, Panera Bread — which operates thousands of outlets across the United States and Canada — would join a growing roster of companies reportedly compromised through similar tactics, including Crunchbase and Betterment. According to ShinyHunters, both organizations were breached via voice phishing attacks aimed at stealing Okta authentication codes.

To date, most of the affected companies have not publicly addressed the incidents. Betterment is the only firm that has acknowledged a breach, confirming that employees were deceived in a social engineering attack on January 9.

"The unauthorized access involved third-party software platforms that Betterment uses to support our marketing and operations," the company said.

"Once they gained access, the unauthorized individual was able to send a fraudulent, crypto-related message that appeared to come from Betterment to a subset of our customers."

ShinyHunters remains one of the most active ransomware groups currently operating and is notable for abandoning traditional encryption tactics. Rather than locking victims out of their systems, the group focuses solely on stealing sensitive information and pressuring organizations to pay in exchange for keeping the data private — a method that is less complex to deploy but potentially just as profitable.

Snap Faces Lawsuit From Creators Over Alleged AI Data Misuse


 

A legal conflict between online creators and artificial intelligence companies has entered a sharper, more personal stage. In recent weeks, well-known YouTubers filed suit in federal court against Snap, alleging that the company built its artificial intelligence capabilities on their copyrighted material.

The complaint raises a familiar but unresolved question for the digital economy: can the vast archives of creator-made video that power the internet be repurposed to train commercial artificial intelligence systems without the creators' knowledge or consent?

Among the participants in the proposed class action, filed in the U.S. District Court for the Central District of California on Friday, are internet personalities whose combined YouTube audience exceeds 6.2 million subscribers.

According to the complaint, the videos the plaintiffs uploaded to YouTube were scraped for use in datasets that trained Snapchat's AI models, in violation of platform rules as well as federal copyright law.

The plaintiffs have previously brought similar claims against Nvidia, Meta, and ByteDance, arguing that a growing segment of the artificial intelligence industry relies on creator content without authorization. Specifically, the YouTubers contend that Snap used large-scale video-language datasets, including HD-VILA-100M, that were developed for academic and research purposes rather than commercial applications.

The newly filed complaint specifically challenges Snap's reported use of these datasets. The plaintiffs assert that any commercial use would have been subject to YouTube's technological safeguards, terms of service, and licensing restrictions, and argue that these limitations were bypassed so that Snap's AI systems could incorporate the material.

In addition to statutory damages, the lawsuit seeks a permanent injunction prohibiting further alleged infringement. Among the participants are the creators of the YouTube channel h3h3, which has 5.52 million subscribers, as well as the golf-focused channels MrShortGame Golf and Golfholics.

The case is among the latest in a series of copyright disputes between creators and artificial intelligence developers. Publishers, authors, newspapers, artists, and user-generated content platforms have recently brought similar claims. According to the nonprofit Copyright Alliance, over 70 copyright infringement lawsuits have been filed against artificial intelligence companies to date, with varying outcomes.

A case involving Meta and a group of authors was resolved in the technology company's favor by a federal judge, while in another case involving Anthropic and authors, the company reached a settlement. Several other cases remain pending, leaving courts to define how technological innovation intersects with intellectual property rights in a rapidly evolving age.

The proposed class extends beyond the named plaintiffs to all U.S.-based individuals who have uploaded original video content to YouTube and whose works were allegedly incorporated into the large-scale video datasets referenced in the complaint.

According to the filing, these datasets formed the foundation of Snap's artificial intelligence training pipeline, enabling the company to ingest and process creator content in significant quantities. The same plaintiffs have brought comparable class complaints against ByteDance, Meta, and Nvidia, part of a coordinated legal strategy intended to challenge industry-wide data acquisition practices.

The plaintiffs seek monetary relief along with a declaratory judgment that Snap willfully circumvented YouTube's copyright protection mechanisms. The complaint also requests statutory damages, costs, and interest, as well as an injunction to stop continued use of the disputed video materials.

The complaint's central claim is that Snap developed and refined its generative AI video systems by accessing and copying YouTube content en masse, despite a platform architecture that permits controlled streaming but does not provide source files for download.

The complaint attributes Snap's model development to specific datasets, including HD-VILA-100M and Panda-70M. According to the filing, HD-VILA-100M contains metadata that references YouTube videos rather than hosting the audiovisual files themselves; the plaintiffs therefore maintain that Snap had to retrieve and duplicate the referenced videos directly from YouTube's servers in order to operationalize the datasets for model training.

They contend that this process necessarily bypassed technological protection measures and access controls designed to prevent large-scale extraction and downloading. The lawsuit alleges the use of automated tools and structured workflows to facilitate the retrieval, and further claims that the datasets segmented individual YouTube uploads into multiple discrete clips, requiring repeated access to the same source video.

According to the plaintiffs, this method resulted in millions of separate acts of copying which were essentially identical in nature. In Snapchat’s AI-powered features, those copies were allegedly used to train and enhance text-to-video and image-to-video models.

Despite license restrictions associated with certain datasets, the filing asserts, these activities were conducted for commercial deployment rather than academic or research purposes. Finally, the plaintiffs assert that Snap's conduct violated YouTube's terms of service and constituted unlawful circumvention of technological safeguards, regardless of whether particular videos had been formally registered with the U.S. Copyright Office.

The complaint thus frames the dispute not merely as a disagreement over platform rules but as a broader test of the legal and technical limits governing large-scale data ingestion for commercial AI development.

The outcome of the litigation may have implications extending far beyond the parties involved. At stake are not only questions of liability in a single dispute but also the broader compliance landscape that undergirds commercial AI development.

In this case, the court will examine how training data is sourced, whether technical safeguards constitute enforceable measures of protection, and how thoroughly dataset provenance and licensing constraints need to be audited before model deployment is undertaken. 

The case reminds technology companies that data governance frameworks must be defensible, training pipelines transparent, and third-party datasets rigorously reviewed. Creators and platforms alike should take note: regulation of artificial intelligence will be shaped less by abstract policy debates and more by detailed judicial scrutiny of the technological processes that transform publicly accessible content into machine-learning systems.

Shadowserver Finds 6,000 Exposed SmarterMail Servers Hit by Critical Flaw

 

The nonprofit cybersecurity group Shadowserver has found more than six thousand SmarterMail systems reachable online and potentially at risk from a serious authentication vulnerability, as attackers increasingly target outdated, unprotected corporate mail deployments.


watchTowr reported the security weakness to SmarterTools on January 8, and a patch was released one week later, before an official CVE identifier had been assigned. The flaw, later designated CVE-2026-23760, received a top-tier severity rating because of the depth of access it grants intruders.

An advisory logged in the NIST National Vulnerability Database describes the issue in SmarterMail releases prior to build 9511: broken access control in the password reset API. The force-reset-password endpoint accepts input without requiring proof of identity, validating neither a reset token nor current login credentials. With no prior access, threat actors can trigger resets for admin accounts using only a known username, granting complete takeover of affected systems.

Attackers can take over admin accounts by abusing this weakness, gaining full access to vulnerable SmarterMail systems through remote code execution. Knowing just one administrator username is enough, according to watchTowr, making it much easier to carry out such attacks. 
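The flaw class at issue here, a force-reset endpoint that skips authentication entirely, can be sketched in a few lines. The function names and data structures below are hypothetical illustrations of the pattern, not SmarterMail's actual code:

```python
# Illustrative sketch of a broken-access-control password reset.
# All names and structures here are hypothetical, not SmarterMail's code.

USERS = {"admin": {"password": "original", "is_admin": True}}
SESSIONS = {"valid-admin-token": {"user": "admin", "is_admin": True}}

def force_reset_vulnerable(username, new_password):
    # BUG: no token or credential check -- anyone who knows a username
    # can overwrite that account's password.
    user = USERS.get(username)
    if user is None:
        return 404
    user["password"] = new_password
    return 200

def force_reset_patched(username, new_password, token=None):
    # FIX: require a valid, admin-scoped session before honoring the reset.
    session = SESSIONS.get(token)
    if session is None or not session["is_admin"]:
        return 401
    user = USERS.get(username)
    if user is None:
        return 404
    user["password"] = new_password
    return 200

print(force_reset_vulnerable("admin", "attacker-chosen"))  # 200: account taken over
print(force_reset_patched("admin", "x", token=None))       # 401: reset rejected
```

The vulnerable path never consults session state at all, which is why a single known username is sufficient for takeover.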

Shadowserver is now tracking more than six thousand SmarterMail servers it marks as probably exposed: over 4,200 in North America and nearly a thousand in Asia. The risk is widespread wherever patches remain unapplied, and organizations slow to update face a higher chance of compromise.

Separate scan data provided to BleepingComputer by Macnica analyst Yutaka Sejiyama showed more than 8,550 vulnerable SmarterMail systems. With attackers continuing to target the flaw, the uneven pace of remediation across networks adds weight to ongoing concerns about delayed patching.

On January 21, watchTowr noted it had detected active exploitation attempts, and the cybersecurity company Huntress confirmed similar incidents the next day. Rather than isolated cases, the activity pointed to broad, automated attacks against exposed servers.

The early warnings prompted CISA to add CVE-2026-23760 to its catalog of actively exploited vulnerabilities, requiring U.S. federal agencies to remediate it by February 16. Because flaws like this often become initial entry points, security teams face rising pressure, especially when hostile groups exploit them quickly; government and corporate networks alike stand at higher risk once such weaknesses become public.

Separately, Shadowserver observed close to 800,000 IP addresses exposing Telnet during incidents tied to a serious authentication loophole in GNU Inetutils' telnetd, highlighting how outdated systems still connected to the web can widen security exposure.

HoneyMyte Upgrades CoolClient: New Browser Stealers Target Asia, Europe

 

The HoneyMyte threat group, also known as Mustang Panda or Bronze President, has escalated its cyber espionage efforts by significantly upgrading its CoolClient backdoor malware. This China-linked advanced persistent threat (APT) actor, active since at least 2012, primarily targets government organizations in Asia and Europe to harvest sensitive geopolitical and economic intelligence.

In 2025, security researchers from Kaspersky identified enhanced versions of CoolClient deployed in campaigns hitting countries including Myanmar, Mongolia, Malaysia, Thailand, Russia, and Pakistan. These updates reflect HoneyMyte's ongoing adaptation to evade detection and maximize data theft from high-value targets. CoolClient now employs a multi-stage infection chain, often using DLL side-loading to hijack legitimate applications from vendors such as Bitdefender, VLC Media Player, and Sangfor.

This technique allows the malware to masquerade as trusted software while executing malicious payloads for persistence and command-and-control communication. The backdoor supports extensible plugins, including new capabilities to extract HTTP proxy credentials from network traffic—a feature not previously observed in HoneyMyte's arsenal. Combined with tools like ToneShell rootkit, PlugX, and USB worms such as Tonedisk, these enhancements enable deeper system compromise and long-term surveillance.

A standout addition is HoneyMyte's browser credential stealer, available in at least three variants tailored to popular browsers. Variant A targets Google Chrome, Variant B focuses on Microsoft Edge, and Variant C handles multiple Chromium-based browsers like Brave and Opera. The stealer copies login databases to temporary folders, leverages Windows Data Protection API (DPAPI) to decrypt master keys and passwords, then reconstructs full credential sets for exfiltration. This shift toward active credential harvesting, alongside keylogging and clipboard monitoring, marks HoneyMyte's evolution from passive espionage to comprehensive victim surveillance.

Supporting these implants, HoneyMyte deploys scripts for reconnaissance, document exfiltration, and system profiling, often in tandem with CoolClient infections. The campaigns use spear-phishing lures mimicking government services in victims' native languages, exploiting regional events for credibility. Earlier variants of CoolClient were analyzed by Sophos in 2022 and Trend Micro in 2023, but the 2025 iterations show marked improvements in stealth and modularity. The group's focus on Southeast Asian governments underscores its alignment with Chinese strategic interests.

Organizations face heightened risks from HoneyMyte's refined toolkit, demanding robust defenses like behavioral monitoring for DLL side-loading, browser credential anomalies, and anomalous network traffic. Government entities in targeted regions should prioritize endpoint detection, credential hygiene, and threat intelligence sharing to counter these persistent threats. As HoneyMyte continues innovating—potentially expanding to Europe—proactive measures remain essential against this adaptable adversary.

Palo Alto Pulls Back from Linking China to Spying Campaign


Palo Alto Networks pulls back

According to two people familiar with the situation, Palo Alto Networks decided against linking China to a global cyberespionage effort the company revealed last week, out of fear that Beijing would retaliate against the cybersecurity business or its clients.

The reason 

According to the sources, Palo Alto scaled back its findings linking China to the widespread hacking spree after Reuters reported last month that the company was one of roughly 15 U.S. and Israeli cybersecurity firms whose software had been banned by Chinese authorities on national security grounds.

According to the two individuals, a draft report from Palo Alto's Unit 42, the company's threat intelligence division, said that the prolific hackers, known as "TGR-STA-1030," were associated with Beijing. 

About the report 

The report, released last Thursday, instead described the hackers more vaguely as a "state-aligned group that operates out of Asia." Advanced attacks are notoriously hard to attribute, and cybersecurity specialists frequently argue over who should be held accountable for digital incursions. Palo Alto executives ordered the adjustment because they were worried about the software ban and suspected it would lead to retaliation by Chinese authorities against the company's employees in China or its customers abroad.

China's reply 

The Chinese Embassy in Washington stated that it is against "any kind of cyberattack." It described attributing hacks as "a complex technical issue" and said it expects that "relevant parties will adopt a professional and responsible attitude, basing their characterization of cyber incidents on sufficient evidence, rather than unfounded speculation and accusations."

Palo Alto discovered the hacker collective TGR-STA-1030 in early 2025, according to the report, and called the extensive operation "The Shadow Campaigns." It claimed that the spies successfully infiltrated government and critical infrastructure institutions in 37 countries and carried out surveillance against almost every nation on the planet.

After reviewing Palo Alto's study, outside experts claimed to have observed comparable activity that they linked to Chinese state-sponsored espionage activities.


Cross-Platform Spyware Campaigns Target Indian Defense and Government Sectors

 



Cybersecurity researchers have identified multiple coordinated cyber espionage campaigns targeting organizations connected to India’s defense sector and government ecosystem. These operations are designed to infiltrate both Windows and Linux systems using remote access trojans that allow attackers to steal sensitive information and retain long-term control over compromised devices.

The activity involves several spyware families, including Geta RAT, Ares RAT, and DeskRAT. These tools have been associated in open-source security reporting with threat clusters commonly tracked as SideCopy and APT36, also known as Transparent Tribe. Analysts assess that SideCopy has operated for several years and functions as an operational subset of the broader cluster. Rather than introducing radically new tactics, the actors appear to be refining established espionage techniques by expanding their reach across operating systems, using stealthier memory-resident methods, and experimenting with new delivery mechanisms to avoid detection while sustaining strategic targeting.

Across the campaigns, initial access is commonly achieved through phishing emails that deliver malicious attachments or links to attacker-controlled servers. Victims are directed to open Windows shortcut files, Linux executables, or weaponized presentation add-ins. These files initiate multi-stage infection chains that install spyware while displaying decoy documents to reduce suspicion.

One observed Windows attack chain abuses a legitimate system utility to retrieve and execute web-hosted malicious code from compromised, regionally trusted websites. The downloaded component decrypts an embedded library, writes a decoy PDF file to disk, contacts a command-and-control server, and opens the decoy for the user. Before deploying Geta RAT, the malware checks which security products are installed and modifies its persistence technique accordingly to improve survivability. This method has been documented in public research by multiple security vendors.

Geta RAT enables extensive surveillance and control, including system profiling, listing and terminating processes, enumerating installed applications, credential theft, clipboard manipulation, screenshot capture, file management, command execution, and data extraction from connected USB devices.

Parallel Linux-focused attacks begin with a loader written in Go that downloads a shell script to install a Python-based Ares RAT. This malware supports remote command execution, data collection, and the running of attacker-supplied scripts. In a separate infection chain, DeskRAT, a Golang-based backdoor, is delivered through a malicious presentation add-in that establishes outbound communication to retrieve the payload, a technique previously described in independent research.

Researchers note that targets extend beyond defense to policy bodies, research institutions, critical infrastructure, and defense-adjacent organizations within the same trusted networks. The combined deployment of Geta RAT, Ares RAT, and DeskRAT reflects a developing toolkit optimized for stealth, persistence, and long-term intelligence collection.

Exposed Training Apps Open the Gap for Crypto Mining in Cloud Environments


Deliberately vulnerable training apps are widely used for security education, product demonstrations, and internal testing. Tools like bWAPP, OWASP Juice Shop, and DVWA are insecure by design, making them useful for learning how common attack techniques work in controlled scenarios.

The problem is not the applications but how they are used in real-world cloud environments. 

Pentera Labs studied how training and demo apps are deployed across cloud infrastructures and found a recurring pattern: apps built for isolated lab use were frequently exposed to the public internet, running inside active cloud accounts, and linked to cloud agents with broader access than they needed.

Deployment pattern analysis

Pentera Labs found that these apps often ran with default settings, overly permissive cloud roles, and minimal isolation. Many of the compromised training environments were linked to active cloud agents and elevated roles, allowing attackers not only to exploit the vulnerable apps themselves but also to reach into the customer's larger cloud infrastructure.

In these contexts, a single exposed training app can serve as an initial foothold. By exploiting linked cloud agents and elevated roles, threat actors gain access not just to the original host or application but also to other resources in the same cloud environment, raising the scope and potential impact of the compromise.
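The least-privilege gap at the center of this pattern can be illustrated with a small check comparing what an app needs against what its role grants. The action names mimic AWS-style identifiers, but the policy data and the crude wildcard handling below are invented for illustration:

```python
# Hypothetical least-privilege check: a training app's cloud role carrying
# far more permissions than the app needs. Action names mimic AWS-style
# identifiers, but the policy data here is invented for illustration.

NEEDED = {"s3:GetObject", "logs:PutLogEvents"}           # what a demo app needs

ATTACHED_ROLE = {                                        # what it was given
    "name": "training-app-role",
    "actions": {"s3:*", "ec2:*", "iam:PassRole", "logs:PutLogEvents"},
}

def excess_permissions(needed, granted):
    """Return granted actions that cover no needed action at all."""
    def covers(grant, need):
        if grant == need:
            return True
        # crude wildcard handling: "s3:*" covers "s3:GetObject"
        return grant.endswith(":*") and need.startswith(grant[:-1])
    return {g for g in granted if not any(covers(g, n) for n in needed)}

extra = excess_permissions(NEEDED, ATTACHED_ROLE["actions"])
print(sorted(extra))  # flags ec2:* and iam:PassRole as pure excess
```

Every action flagged as excess is pure blast radius: it contributes nothing to the app's function but everything to an attacker who lands on the host.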

As part of the investigation, Pentera Labs verified nearly 2,000 live, exposed training application instances, with close to 60% hosted on customer-managed infrastructure running on AWS, Azure, or GCP.

Proof of active exploitation 

The investigation revealed that the exposed training environments were not merely misconfigured: Pentera Labs found clear evidence that attackers were actively exploiting them in the wild.

In roughly 20% of the publicly exposed training applications in the larger dataset, researchers discovered attacker-deployed artifacts such as webshells, persistence mechanisms, and crypto-mining activity. These artifacts showed that the exposed systems had already been compromised and were still being abused.

The presence of persistence tooling and active crypto-mining indicates that exposed training apps are not merely discoverable but already being exploited at scale.

Model Context Protocol Security Crisis Deepens as Exposed AI Agents Create Massive Attack Surface

 

The Model Context Protocol (MCP) continues to face mounting security concerns that show no signs of fading. When vulnerabilities were first highlighted last October, early research already pointed to serious risks. Findings from Pynt indicated that installing just 10 MCP plug-ins results in a 92% likelihood of exploitation, with even a single plug-in introducing measurable exposure.
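Those figures can be roughly inverted to an implied per-plug-in exposure, under an independence assumption the Pynt research does not necessarily make:

```python
# Rough inversion of the reported numbers, assuming each plug-in carries an
# independent, identical exposure probability -- a simplifying assumption,
# not a claim from the Pynt research itself.

p_total = 0.92   # reported likelihood of exploitation with 10 plug-ins
n_plugins = 10

# Solve 1 - (1 - p)**n = p_total for the per-plug-in probability p.
p_single = 1 - (1 - p_total) ** (1 / n_plugins)
print(f"Implied per-plug-in exposure: {p_single:.1%}")  # roughly 22%
```

Even under this crude model, a single plug-in implies a measurable, double-digit chance of exposure, consistent with the researchers' warning.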

The emergence of Clawdbot significantly altered the threat landscape. The fast-growing personal AI assistant — capable of managing inboxes and generating code autonomously — operates entirely on MCP. Developers who deployed Clawdbot on virtual private servers without reviewing security documentation may have unintentionally exposed their organizations to the protocol’s full attack surface.

(The project rebranded from Clawdbot to Moltbot on January 27 after Anthropic issued a trademark request over the similarity to "Claude.")

Security entrepreneur Itamar Golan anticipated this trajectory. After selling Prompt Security to SentinelOne for an estimated $250 million last year, he issued a public warning on X this week: "Disaster is coming. Thousands of Clawdbots are live right now on VPSs … with open ports to the internet … and zero authentication. This is going to get ugly."

Subsequent internet scans by Knostic reinforced those concerns. Researchers identified 1,862 MCP servers publicly accessible without authentication. Out of 119 servers tested, every single one responded without requesting credentials.

The implication is straightforward: any function automated by Clawdbot can potentially be repurposed by attackers.
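A scan like Knostic's can be approximated with a single unauthenticated MCP initialize call. The sketch below assumes the streamable-HTTP transport and a plain JSON endpoint; real deployments vary in path and transport, and the `protocolVersion` and client name are illustrative. Only probe servers you own or are authorized to test:

```python
import json
import urllib.error
import urllib.request

def initialize_payload() -> dict:
    """JSON-RPC 2.0 'initialize' request, the first message of an MCP
    handshake. Field values here are illustrative."""
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "initialize",
        "params": {
            "protocolVersion": "2025-03-26",
            "capabilities": {},
            "clientInfo": {"name": "auth-probe", "version": "0.1"},
        },
    }

def responds_without_credentials(url: str, timeout: float = 5.0) -> bool:
    """True if the server completes the handshake with no token attached,
    i.e. authentication is not enforced."""
    req = urllib.request.Request(
        url,
        data=json.dumps(initialize_payload()).encode(),
        headers={
            "Content-Type": "application/json",
            "Accept": "application/json, text/event-stream",
        },
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as exc:
        # A 401/403 means some authentication layer is in place.
        return exc.code not in (401, 403)
```

A server that answers this request with a successful handshake is, by definition, accepting anonymous clients.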

Recent vulnerabilities are not isolated anomalies — they stem from fundamental design choices within MCP. Three major CVEs illustrate this pattern:
  • CVE-2025-49596 (CVSS 9.4): Anthropic’s MCP Inspector enabled unauthenticated communication between its web interface and proxy server, making full system compromise possible through a malicious webpage.
  • CVE-2025-6514 (CVSS 9.6): A command injection flaw in mcp-remote — an OAuth proxy downloaded 437,000 times — allowed system takeover when connected to a malicious MCP server.
  • CVE-2025-52882 (CVSS 8.8): Widely used Claude Code extensions exposed unauthenticated WebSocket servers, permitting arbitrary file access and remote code execution.
Three high-severity vulnerabilities within six months, each exploiting different attack vectors, all trace back to the same core issue: authentication in MCP was optional, and many developers treated optional controls as unnecessary.

Further analysis by Equixly found systemic weaknesses across popular MCP implementations. Their review revealed that 43% contained command injection flaws, 30% allowed unrestricted URL fetching, and 22% exposed files beyond intended directories.

Forrester analyst Jeff Pollard summarized the concern in a blog post: "From a security perspective, it looks like a very effective way to drop a new and very powerful actor into your environment with zero guardrails."

The risk is substantial. An MCP server with shell access can enable lateral movement, credential harvesting, and ransomware deployment — all triggered through prompt injection hidden within documents processed by AI agents.

Known Flaws, Slow Mitigation

Security researcher Johann Rehberger disclosed a file exfiltration vulnerability last October, demonstrating how prompt injection could manipulate AI agents into transmitting sensitive files to attacker-controlled accounts.

Anthropic’s launch of Cowork this month extended MCP-based agents to a broader and potentially less security-aware audience. The same vulnerability remains exploitable. PromptArmor recently demonstrated how a malicious document could trick an agent into uploading confidential financial information.

Anthropic’s mitigation guidance states that users should watch for "suspicious actions that may indicate prompt injection."

Investor Olivia Moore of a16z highlighted the broader disconnect after testing Clawdbot over a weekend: "You're giving an AI agent access to your accounts. It can read your messages, send texts on your behalf, access your files, and execute code on your machine. You need to actually understand what you're authorizing."

The challenge is that many users — and many developers — do not fully grasp the scope of access they grant. MCP’s architecture never required them to.

Five Immediate Steps for Security Leaders

Security experts recommend urgent action:
  • Audit MCP deployments immediately. Standard endpoint detection tools often overlook MCP servers because they appear as legitimate Node or Python processes. Specialized visibility is required.
  • Make authentication mandatory. While the MCP specification recommends OAuth 2.1, its SDK does not enforce built-in authentication. All production deployments should require authentication by default.
  • Limit network exposure. MCP servers should bind to localhost unless remote access is strictly necessary and secured. The large number of exposed servers suggests misconfiguration is widespread.
  • Design for inevitable prompt injection. Assume agents will be compromised. Implement access controls accordingly, especially if servers wrap cloud credentials, filesystems, or deployment pipelines.
  • Enforce human approval for sensitive actions. Require explicit confirmation before agents send external communications, delete data, or access confidential resources. AI agents should be treated like fast but literal junior employees who will execute instructions exactly as given.
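The first three steps can be combined in a small gating layer in front of an MCP endpoint. The sketch below uses only the standard library rather than the MCP SDK's own auth mechanisms; the header format, port, and environment-variable name are assumptions for illustration:

```python
import hmac
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

# Shared secret; in practice, load this from a secrets manager rather
# than an environment-variable default.
EXPECTED_TOKEN = os.environ.get("MCP_AUTH_TOKEN", "change-me")

def is_authorized(authorization_header: str) -> bool:
    """Constant-time comparison of the supplied bearer token."""
    return hmac.compare_digest(authorization_header,
                               f"Bearer {EXPECTED_TOKEN}")

class AuthGate(BaseHTTPRequestHandler):
    def do_POST(self):
        if not is_authorized(self.headers.get("Authorization", "")):
            self.send_response(401)
            self.end_headers()
            return
        # ... forward the request to the real MCP handler here ...
        self.send_response(200)
        self.end_headers()

def serve():
    # Bind to loopback only; never 0.0.0.0 unless remote access is
    # genuinely required and separately secured.
    HTTPServer(("127.0.0.1", 8931), AuthGate).serve_forever()
```

Rejecting unauthenticated requests before they reach the MCP process addresses exactly the gap the Knostic scan exposed: servers that answer anyone who asks.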
While security vendors quickly capitalized on MCP-related risks, many enterprises lagged behind. Clawdbot adoption surged in Q4 2025, yet most 2026 security roadmaps lack dedicated AI agent controls.

The divide between developer enthusiasm and organizational governance continues to grow. As Golan warned, "This is going to get ugly."

The pressing question is whether organizations will secure their MCP infrastructure before attackers exploit the opportunity.

Malicious Outlook Add-In Hijack Steals 4,000 Microsoft Credentials

 

A supply-chain takeover turned AgreeTo, a Microsoft Outlook add-in originally built for scheduling meetings, into a credential-harvesting tool that captured more than 4,000 Microsoft logins. Built by a third-party developer and listed in the official Office Add-in Store since late 2022, the add-in was repurposed against its users: instead of simplifying calendars, it quietly funneled credentials to attackers under the cover of established trust.

Office add-ins do not necessarily run locally; many load their interface directly from web addresses. AgreeTo's functionality was served from a URL hosted on Vercel. When the developer abandoned the project, that address stopped receiving updates even though users kept the add-in installed, and Microsoft continued to list it for download. An attacker later took control of the lapsed hosting and began serving malicious content under the app's trusted name: according to analysts at Koi Security, a login screen mimicking Microsoft's design appeared where the add-in's real interface should have been.

Users were presented with a counterfeit form built to harvest credentials, while hidden scripts silently exfiltrated the captured data. Once approved in Microsoft's marketplace, the add-in escaped further scrutiny: Microsoft reviews only the manifest at submission, and nothing beyond it is verified afterward. Interface components and features load externally, from servers the developers themselves control.
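That review gap is structural to the add-in format: the manifest Microsoft inspects is essentially a pointer. A simplified fragment (the URL and ID below are hypothetical) shows how the task pane UI is fetched live from a developer-controlled address:

```xml
<!-- Simplified Office add-in manifest fragment; URL and ID are
     hypothetical. Microsoft reviews this file once at submission,
     but the page it points to is fetched live on every use, so
     whoever controls the host controls what users see. -->
<OfficeApp xmlns="http://schemas.microsoft.com/office/appforoffice/1.1"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:type="TaskPaneApp">
  <Id>11111111-2222-3333-4444-555555555555</Id>
  <DefaultSettings>
    <!-- If this hosting lapses, whoever claims it serves the UI. -->
    <SourceLocation DefaultValue="https://example-addin-host.app/taskpane.html"/>
  </DefaultSettings>
</OfficeApp>
```

Nothing in the manifest changes when the content behind `SourceLocation` does, which is why the store listing remained "approved" throughout the hijack.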

Because AgreeTo had already passed initial review, its content was served directly from infrastructure now under attacker control; oversight effectively ended at publication. Inside the attacker's exfiltration pipeline, Koi Security found more than 4,000 stolen Microsoft credentials, along with credit card records and answers to bank verification questions. While analyzing the activity, researchers observed the breached credentials being used in live attacks in real time.

Opening the compromised AgreeTo add-in in Outlook displayed a counterfeit Microsoft login screen in the sidebar instead of the expected calendar tool. The imitation closely resembled an authentic authentication portal and was difficult to recognize as fraudulent. Once victims submitted their details, the credentials were exfiltrated through a Telegram bot; victims were then redirected to the genuine Microsoft sign-in page, masking what had just occurred. Although the add-in retained ReadWriteItem permission, which allows viewing and editing messages, there is no evidence it tampered with any emails.

Investigators attribute the campaign to a single actor running several phishing operations targeting financial services, internet providers, and email platforms. AgreeTo stands apart from past threats that spread via spam, phishing, or malvertising because it lived inside Microsoft's official store. According to Oren Yomtov at Koi, this is the first verified piece of malware to appear on the Microsoft Marketplace, and the first malicious Outlook add-in observed being actively exploited in the wild.

Microsoft has since removed AgreeTo from the store. Anyone who still has the add-in installed should uninstall it immediately and change their password. Microsoft was contacted for comment but has not yet replied.