Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Cybercriminals Target Cloud File-Sharing Services to Access Corporate Data

 



Cybersecurity analysts are raising concerns about a growing trend in which corporate cloud-based file-sharing platforms are being leveraged to extract sensitive organizational data. A cybercrime actor known online as “Zestix” has recently been observed advertising stolen corporate information that allegedly originates from enterprise deployments of widely used cloud file-sharing solutions.

Findings shared by cyber threat intelligence firm Hudson Rock suggest that the initial compromise may not stem from vulnerabilities in the platforms themselves, but rather from infected employee devices. In several cases examined by researchers, login credentials linked to corporate cloud accounts were traced back to information-stealing malware operating on users’ systems.

These malware strains are typically delivered through deceptive online tactics, including malicious advertising and fake system prompts designed to trick users into interacting with harmful content. Once active, such malware can silently harvest stored browser data, saved passwords, personal details, and financial information, creating long-term access risks.

When attackers obtain valid credentials and the associated cloud service account does not enforce multi-factor authentication, unauthorized access becomes significantly easier. Without this added layer of verification, threat actors can enter corporate environments using legitimate login details without immediately triggering security alarms.

Hudson Rock also reported that some of the compromised credentials identified during its investigation had been present in criminal repositories for extended periods. This suggests lapses in routine password management practices, such as timely credential rotation or session invalidation after suspected exposure.

Researchers describe Zestix as operating in the role of an initial access broker, meaning the actor focuses on selling entry points into corporate systems rather than directly exploiting them. The access being offered reportedly involves cloud file-sharing environments used across a range of industries, including transportation, healthcare, utilities, telecommunications, legal services, and public-sector operations.

To validate its findings, Hudson Rock analyzed malware-derived credential logs and correlated them with publicly accessible metadata and open-source intelligence. Through this process, the firm identified multiple instances where employee credentials associated with cloud file-sharing platforms appeared in confirmed malware records. However, the researchers emphasized that these findings do not constitute public confirmation of data breaches, as affected organizations have not formally disclosed incidents linked to the activity.

The data allegedly being marketed spans a wide spectrum of corporate and operational material, including technical documentation, internal business files, customer information, infrastructure layouts, and contractual records. Exposure of such data could lead to regulatory consequences, reputational harm, and increased risks related to privacy, security, and competitive intelligence.

Beyond the specific cases examined, researchers warn that this activity reflects a broader structural issue. Threat intelligence data indicates that credential-stealing infections remain widespread across corporate environments, reinforcing the need for stronger endpoint security, consistent use of multi-factor authentication, and proactive credential hygiene.

Hudson Rock stated that relevant cloud service providers have been informed of the verified exposures to enable appropriate mitigation measures.

Ledger Customer Data Exposed After Global-e Payment Processor Cloud Incident

 

A fresh leak of customer details emerged, linked not to Ledger’s systems but to Global-e - an outside firm handling payments for Ledger.com. News broke when affected users received an alert email from Global-e. That message later appeared online, posted by ZachXBT, a known blockchain tracker using a fake name, via the platform X. 

Unexpectedly, a breach exposed some customer records belonging to Ledger, hosted within Global-e’s online storage system. Personal details, including names and email addresses made up the compromised data, one report confirmed. What remains unclear is the number of people impacted by this event. At no point has Global-e shared specifics about when the intrusion took place.  

Unexpected behavior triggered alerts at Global-e, prompting immediate steps to secure systems while probes began. Investigation followed swiftly after safeguards were applied, verifying unauthorized entry had occurred. Outside experts joined later to examine how the breach unfolded and assess potential data exposure. Findings showed certain personal details - names among them - were viewed without permission. Contact records also appeared in the set of compromised material. What emerged from analysis pointed clearly to limited but sensitive information being reached. 

Following an event involving customer data, Ledger confirmed details in a statement provided to CoinDesk. The issue originated not in Ledger's infrastructure but inside Global-e’s operational environment. Because Global-e functions as the Merchant of Record for certain transactions, it holds responsibility for managing related personal data. That role explains why Global-e sent alerts directly to impacted individuals. Information exposed includes records tied to purchases made on Ledger.com when buyers used Global-e’s payment handling system. 

While limited to specific order-related fields, access was unauthorized and stemmed from weaknesses at Global-e. Though separate entities, their integration during checkout links them in how transactional information flows. Customers involved completed orders between defined dates under these service conditions. Security updates followed after discovery, coordinated across both organizations. Notification timing depended on forensic review completion by third-party experts. Each step aimed at clarity without premature disclosure before full analysis. 

Still, the firm pointed out its own infrastructure - platform, hardware, software - was untouched by the incident. Security around those systems remains intact, according to their statement. What's more, since users keep control of their wallets directly, third parties like Global-e cannot reach seed phrases or asset details. Access to such private keys never existed for external entities. Payment records, meanwhile, stayed outside the scope of what appeared in the leak. 

Few details emerged at first, yet Ledger confirmed working alongside Global-e to deliver clear information to those involved. That setup used by several retailers turned out to be vulnerable, pointing beyond a single company. Updates began flowing after detection, though the impact spread wider than expected across shared infrastructure. 

Coming to light now, this revelation follows earlier security problems connected to Ledger. Back in 2020, a flaw at Shopify - the online store platform they used - led to a leak affecting 270,000 customers’ details. Then, in 2023, another event hit, causing financial damage close to half a million dollars and touching multiple DeFi platforms. Though different in both scale and source, the newest issue highlights how reliance on outside vendors can still pose serious threats when handling purchases and private user information.  

Still, Ledger’s online platforms showed no signs of a live breach on their end, yet warnings about vigilance persist. Though nothing points to internal failures, alerts remind customers to stay alert regardless. Even now, with silence across official posts, guidance leans toward caution just the same.

ESA Confirms Cyber Breach After Hacker Claims 200GB Data Theft

 

The European Space Agency (ESA) has confirmed a major cybersecurity incident in the external servers used for scientific cooperation. The hackers who carried out the operation claim responsibility for the breach in a post in the hacking community site BreachForums and claim that over 200 GB worth of data has been stolen, including source code, API tokens, and credentials. This incident highlights escalating cyber threats to space infrastructure amid growing interconnectedness in the sector 

It is alleged that the incident occurred around December 18, 2025, with an actor using the pseudonym "888" allegedly gaining access to ESA's JIRA and Bitbucket systems for an approximate week's duration. ESA claims that the compromised systems represented a "very small number" of systems not on their main network, which only included unclassified data meant for engineering partnerships. As a result, the agency conducted an investigation, secured the compromised systems, and notified stakeholders, while claiming that no mission critical systems were compromised. 

The leaked data includes CI/CD pipelines, Terraform files, SQL files, configurations, and hardcoded credentials, which have sparked supply chain security concerns. As for the leaked data, it includes screenshots from the breach, which show unauthorized access to private repositories. However, it is unclear whether this data is genuine or not. It is also unclear whether the leaked data is classified or not. As for security experts, it is believed that this data can be used for lateral movements by highly sophisticated attackers, even if it is unclassified. 

Adding to the trouble, the Lapsus$ group said they carried out a separate breach in September 2025, disclosing they exfiltrated 500 GB of data containing sensitive files on spacecraft operations, mission specifics, and contractor information involving partners such as SpaceX and Airbus. The ESA opened a criminal investigation, working with the authorities, however the immediate effects were minimized. The agency has been hit by a string of incidents since 2011, including skimmers placed on merchandise site readers. 

The series of breaches may be indicative of the "loosely coupled" regional space cooperative environment featuring among the ESA 23 member states. Space cybersecurity requirements are rising—as evidenced by open solicitations for security products—incidents like this may foster distrust of global partnerships. Investigations continue on what will be the long-term threats, but there is a pressing need for stronger protection.

Targeted Cyberattack Foiled by Resecurity Honeypot


 

There has been a targeted intrusion attempt against the internal environment of Resecurity in November 2025, which has been revealed in detail by the cyber security company. In order to expose the adversaries behind this attack, the company deliberately turned the attack into a counterintelligence operation by using advanced deception techniques.

In response to a threat actor using a low-privilege employee account in order to gain access to an enterprise network, Resecurity’s incident response team redirected the intrusion into a controlled synthetic data honeypot that resembles a realistic enterprise network within which the intrusion could be detected. 

A real-time analysis of the attackers’ infrastructure, as well as their tradecraft, was not only possible with this move, but it also triggered the involvement of law enforcement after a number of evidences linked the activity to an Egyptian-based threat actor and infrastructure associated with the ShinyHunter cybercrime group, which has subsequently been shown to have claimed responsibility for the data breach falsely. 

Resecurity demonstrated how modern deception platforms, with the help of synthetic datasets generated by artificial intelligence, combined with carefully curated artifacts gathered from previously leaked dark web material, can transform reconnaissance attempts by financially motivated cybercriminals into actionable intelligence.

The active defense strategies are becoming increasingly important in today's cybersecurity operations as they do not expose customer or proprietary data.

The Resecurity team reported that threat actors operating under the nickname "Scattered Lapsus$ Hunters" publicly claimed on Telegram that they had accessed the company's systems and stolen sensitive information, such as employee information, internal communications, threat intelligence reports, client data, and more. This claim has been strongly denied by the firm. 

In addition to the screenshots shared by the group, it was later confirmed that they came from a honeypot environment that had been built specifically for Resecurity instead of Resecurity's production infrastructure. 

On the 21st of November 2025, the company's digital forensics and incident response team observed suspicious probes of publicly available services, as well as targeted attempts to access a restricted employee account. This activity was detected by the company's digital forensics and incident response team. 

There were initial traces of reconnaissance traffic to Egyptian IP addresses, such as 156.193.212.244 and 102.41.112.148. As a result of the use of commercial VPN services, Resecurity shifted from containment to observation, rather than blocking the intrusion.

Defenders created a carefully staged honeytrap account filled with synthetic data in order to observe the attackers' tactics, techniques, and procedures, rather than blocking the intrusion. 

A total of 28,000 fake consumer profiles were created in the decoy environment, along with nearly 190,000 mock payment transactions generated from publicly available patterns that contained fake Stripe records as well as fake email addresses that were derived from credential “combo lists.” 

In order to further enhance the authenticity of the data, Resecurity reactivated a retired Mattermost collaboration platform, and seeded it with outdated 2023 logs, thereby convincing the attackers that the system was indeed genuine. 

There were approximately 188,000 automated requests routed through residential proxy networks in an attempt by the attackers to harvest the synthetic dataset between December 12 and December 24. This effort ultimately failed when repeated connection failures revealed operational security shortcomings and revealed some of the attackers' real infrastructure in the process of repeated connection failures exposing vulnerabilities in the security of the system. 

A recent press release issued by Resecurity denies the breach allegation, stating that the systems cited by the threat actors were never part of its production environment, but were rather deliberately exposed honeypot assets designed to attract and observe malicious activity from a distance.

After receiving external inquiries, the company’s digital forensics and incident response teams first detected reconnaissance activity on November 21, 2025, after a threat actor began probing publicly accessible services on November 20, 2025, in a report published on December 24 and shared with reporters. 

Telemetry gathered early in the investigation revealed a number of indications that the network had been compromised, including connections coming from Egyptian IP addresses, as well as traffic being routed through Mullvas VPN infrastructure. 

A controlled honeypot account has been deployed by Resecurity inside an isolated environment as a response to the attack instead of a move to containment immediately. As a result, the attacker was able to authenticate to and interact with systems populated completely with false employee, customer, and payment information while their actions were closely monitored by Resecurity. 

Specifically, the synthetic datasets were designed to replicate the actual enterprise data structures, including over 190,000 fictitious consumer profiles and over 28,000 dummy payment transactions that were formatted to adhere to Stripe's official API specifications, as defined in the Stripe API documentation. 

In the early months of the operation, the attacker used residential proxy networks extensively to generate more than 188,000 requests for data exfiltration, which occurred between December 12 and December 24 as an automated data exfiltration operation. 

During this period, Resecurity collected detailed telemetry on the adversary's tactics, techniques, and supporting infrastructure, resulting in several operational security failures that were caused by proxy disruptions that briefly exposed confirmed IP addresses, which led to multiple operational security failures. 

As the deception continued, investigators introduced additional synthetic datasets, which led to even more mistakes that narrowed the attribution and helped determine the servers that orchestrated the activity, leading to an increase in errors. 

In the aftermath of sharing the intelligence with law enforcement partners, a foreign agency collaborating with Resecurity issued a subpoena request, which resulted in Resecurity receiving a subpoena. 

Following this initial breach, the attackers continued to make claims on Telegram, and their data was also shared with third-party breach analysts, but these statements, along with the new claims, were found to lack any verifiable evidence of actual compromise of real client systems. Independent review found that no evidence of the breach existed. 

Upon further examination, it was determined that the Telegram channel used to distribute these claims had been suspended, as did follow-on assertions from the ShinyHunters group, which were also determined to be derived from a honeytrap environment.

The actors, unknowingly, gained access to a decoy account and infrastructure, which was enough to confirm their fall into the honeytrap. Nevertheless, the incident demonstrates both the growing sophistication of modern deception technology as well as the importance of embedding them within a broader, more resilient security framework in order to maximize their effectiveness. 

A honeypot and synthetic data environment can be a valuable tool for observing attacker behavior. However, security leaders emphasize that the most effective way to use these tools is to combine them with strong foundational controls, including continuous vulnerability management, zero trust access models, multifactor authentication, employee awareness training, and disciplined network segmentation. 

Resecurity represents an evolution in defensive strategy from a reactive and reactionary model to one where organizations are taking a proactive approach in the fight against cyberthreats by gathering intelligence, disrupting the operations of adversaries, and reducing real-world risk in the process. 

There is no doubt that the ability to observe, mislead, and anticipate hostile activity, before meaningful damage occurs, is becoming an increasingly important element of enterprise defenses in the age of cyber threats as they continue to evolve at an incredible rate.

Together, the episodes present a rare, transparent view of how modern cyber attacks unfold-and how they can be strategically neutralized in order to avoid escalation of risk to data and real systems. 

Ultimately, Resecurity's claims serve more as an illustration of how threat actors are increasingly relying on perception, publicity, and speed to shape narratives before facts are even known to have been uncovered, than they serve as evidence that a successful breach occurred. 

Defenders of the case should take this lesson to heart: visibility and control can play a key role in preventing a crisis. It has become increasingly important for organizations to be able to verify, contextualize, and counter the false claims that are made by their adversaries as they implement technical capabilities combined with psychological tactics in an attempt to breach their systems. 

The Resecurity incident exemplifies how disciplined preparation and intelligence-led defense can help turn an attempted compromise into strategic advantage in an environment where trust and reputation are often the first targets. They do this quiet, methodically, and without revealing what really matters when a compromise occurs.

What Happens When Spyware Hits a Phone and How to Stay Safe

 



Although advanced spyware attacks do not affect most smartphone users, cybersecurity researchers stress that awareness is essential as these tools continue to spread globally. Even individuals who are not public figures are advised to remain cautious.

In December, hundreds of iPhone and Android users received official threat alerts stating that their devices had been targeted by spyware. Shortly after these notifications, Apple and Google released security patches addressing vulnerabilities that experts believe were exploited to install the malware on a small number of phones.

Spyware poses an extreme risk because it allows attackers to monitor nearly every activity on a smartphone. This includes access to calls, messages, keystrokes, screenshots, notifications, and even encrypted platforms such as WhatsApp and Signal. Despite its intrusive capabilities, spyware is usually deployed in targeted operations against journalists, political figures, activists, and business leaders in sensitive industries.

High-profile cases have demonstrated the seriousness of these attacks. Former Amazon chief executive Jeff Bezos and Hanan Elatr, the wife of murdered Saudi dissident Jamal Khashoggi, were both compromised through Pegasus spyware developed by the NSO Group. These incidents illustrate how personal data can be accessed without user awareness.

Spyware activity remains concentrated within these circles, but researchers suggest its reach may be expanding. In early December, Google issued threat notifications and disclosed findings showing that an exploit chain had been used to silently install Predator spyware. Around the same time, the U.S. Cybersecurity and Infrastructure Security Agency warned that attackers were actively exploiting mobile messaging applications using commercial surveillance tools.

One of the most dangerous techniques involved is known as a zero-click attack. In such cases, a device can be infected without the user clicking a link, opening a message, or downloading a file. According to Malwarebytes researcher Pieter Arntz, once infected, attackers can read messages, track keystrokes, capture screenshots, monitor notifications, and access banking applications. Rocky Cole of iVerify adds that spyware can also extract emails and texts, steal credentials, send messages, and access cloud accounts.

Spyware may also spread through malicious links, fake applications, infected images, browser vulnerabilities, or harmful browser extensions. Recorded Future’s Richard LaTulip notes that recent research into malicious extensions shows how tools that appear harmless can function as surveillance mechanisms. These methods, often associated with nation-state actors, are designed to remain hidden and persistent.

Governments and spyware vendors frequently claim such tools are used only for law enforcement or national security. However, Amnesty International researcher Rebecca White states that journalists, activists, and others have been unlawfully targeted worldwide, using spyware as a method of repression. Thai activist Niraphorn Onnkhaow was targeted multiple times during pro-democracy protests between 2020 and 2021, eventually withdrawing from activism due to fears her data could be misused.

Detecting spyware is challenging. Devices may show subtle signs such as overheating, performance issues, or unexpected camera or microphone activation. Official threat alerts from Apple, Google, or Meta should be treated seriously. Leaked private information can also indicate compromise.

To reduce risk, Apple offers Lockdown Mode, which limits certain functions to reduce attack surfaces. Apple security executive Ivan Krstić states that widespread iPhone malware has not been observed outside mercenary spyware campaigns. Apple has also introduced Memory Integrity Enforcement, an always-on protection designed to block memory-based exploits.

Google provides Advanced Protection for Android, enhanced in Android 16 with intrusion logging, USB safeguards, and network restrictions.

Experts recommend avoiding unknown links, limiting app installations, keeping devices updated, avoiding sideloading, and restarting phones periodically. However, confirmed infections often require replacing the device entirely. Organizations such as Amnesty International, Access Now, and Reporters Without Borders offer assistance to individuals who believe they have been targeted.

Security specialists advise staying cautious without allowing fear to disrupt normal device use.

Grok AI Faces Global Backlash Over Nonconsensual Image Manipulation on X

 

A dispute over X's internal AI assistant, Grok, is gaining attention - questions now swirl around permission, safety measures online, yet also how synthetic media tools can be twisted. This tension surfaced when Julie Yukari, a musician aged thirty-one living in Rio de Janeiro, posted a picture of herself unwinding with her cat during New Year’s Eve celebrations. Shortly afterward, individuals on the network started instructing Grok to modify that photograph, swapping her outfit for skimpy beach attire through digital manipulation. 

What started as skepticism soon gave way to shock. Yukari had thought the system wouldn’t act on those inputs - yet it did. Images surfaced, altered, showing her with minimal clothing, spreading fast across the app. She called the episode painful, a moment that exposed quiet vulnerabilities. Consent vanished quietly, replaced by algorithms working inside familiar online spaces. 

A Reuters probe found that Yukari’s situation happens more than once. The organization uncovered multiple examples where Grok produced suggestive pictures of actual persons, some seeming underage. No reply came from X after inquiries about the report’s results. Earlier, xAI - the team developing Grok - downplayed similar claims quickly, calling traditional outlets sources of false information. 

Across the globe, unease is growing over sexually explicit images created by artificial intelligence. Officials in France have sent complaints about X to legal authorities, calling such content unlawful and deeply offensive to women. A similar move came from India’s technology ministry, which warned X it did not stop indecent material from being made or shared online. Meanwhile, agencies in the United States, like the FCC and FTC, chose silence instead of public statements. 

A sudden rise in demands for Grok to modify pictures into suggestive clothing showed up in Reuters' review. Within just ten minutes, over one00 instances appeared - mostly focused on younger females. Often, the system produced overt visual content without hesitation. At times, only part of the request was carried out. A large share vanished quickly from open access, limiting how much could be measured afterward. 

Some time ago, image-editing tools driven by artificial intelligence could already strip clothes off photos, though they mostly stayed on obscure websites or required payment. Now, because Grok is built right into a well-known social network, creating such fake visuals takes almost no work at all. Warnings had been issued earlier to X about launching these kinds of features without tight controls. 

People studying tech impacts and advocacy teams argue this situation followed clearly from those ignored alerts. From a legal standpoint, some specialists claim the event highlights deep flaws in how platforms handle harmful content and manage artificial intelligence. Rather than addressing risks early, observers note that X failed to block offensive inputs during model development while lacking strong safeguards on unauthorized image creation. 

In cases such as Yukari’s, consequences run far beyond digital space - emotions like embarrassment linger long after deletion. Although aware the depictions were fake, she still pulled away socially, weighed down by stigma. Though X hasn’t outlined specific fixes, pressure is rising for tighter rules on generative AI - especially around responsibility when companies release these tools widely. What stands out now is how little clarity exists on who answers for the outcomes.

AI Expert Warns World Is Running Out of Time to Tackle High-Risk AI Revolution

 

AI safety specialist David Dalrymple has warned in no unclear terms that humanity may be running out of time to get ready for the dangers of fast-moving artificial intelligence. When talking to The Guardian, the director of programme at the UK government’s Advanced Research and Invention Agency (ARIA) emphasised that AI development is progressing “really fast,” and that no society can safely take these systems being reliable for granted. He is the latest authoritative figure to add to the escalating global anxiety that deployment is outstripping safety research and governance models. 

Dalrymple contended that the existential risk is from AI systems that can do virtually all economically valuable human work but more quickly, at lower cost and at a higher quality. In his mind, these intellectual systems might “outcompete” humans in the very domains that constitute our control over civilization, society and perhaps even planetary-scale decisions. And not just about losing jobs, but about losing strategic dominance in vital sectors, from security to infrastructure management.

He described a scenario in which AI capabilities race ahead of safety mechanisms, triggering destabilisation across both the security landscape and the broader economy. Dalrymple emphasised an urgent need for more technical research into understanding and controlling the behaviour of advanced AI, particularly as systems become more autonomous and integrated into vital services. Without this work, he suggested, governments and institutions risk deploying tools whose failure modes and emergent properties they barely understand. 

 Dalrymple, who among other things consults with ARIA on creating protections for AI systems used in critical infrastructure like energy grids, warned that it is “very dangerous” for policymakers to believe advanced AI will just work as they want it to. He noted that the science needed to fully guarantee reliability is unlikely to emerge in time, given the intense economic incentives driving rapid deployment. As a result, he argued the “next best” strategy is aggressively focusing on controlling and mitigating the downsides, even if perfect assurance is out of reach. 

The AI expert also said that by late 2026, AI systems may be able to do a full day of R&D, including self-improvement in such AI-related fields as mathematics and computer science. Such an innovation would give a further jolt to AI capabilities, and bring society more deeply into what he described as a “high-risk” transition that civilization is mostly “sleepwalking” into. And while he conceded that unsettling developments can ultimately yield benefits, he said the road we appear to be on is one that holds a lot of peril for if safety continues to lag behind capability.

Privacy Takes Center Stage in WhatsApp’s Latest Feature Update

 


There are billions of WhatsApp users worldwide, making it a crucial communication platform for both personal and professional exchanges alike. But its wide spread has also made it an increasingly attractive target for cybercriminals because of its widespread reach and popularity. 

Recent security research has highlighted the possibility of emerging threats exploiting the platform's ecosystem. For example, a technique known as GhostPairing is being used to connect a victim's account to a malicious browser session through the use of a covert link. 

Additionally, separate studies have shown that the app's contact discovery functionality can also be exploited by third parties in order to expose large numbers of phone numbers, as well as photo profiles and other identifying information, causing fresh concerns about the exploitation of large-scale data. 

Despite the fact that WhatsApp relyes heavily on end-to-end encryption to safeguard message content and has made additional efforts to ensure the safety of message content, including passkey-secured backups and privacy-conscious artificial intelligence, security experts emphasize that user awareness remains an important factor in protecting the service from threats. 

When properly enabled, the platform comes with a variety of built-in tools that, when properly deployed, can significantly enhance account security and reduce risk of exposure to evolving digital threats when implemented properly. 

WhatsApp has continued to strengthen its end-to-end encryption framework in response to these evolving risks as well as to increase its portfolio of privacy-centric security controls. In response, it has been said that security analysts believe that limited user awareness often undermines the effectiveness of these safeguards, causing many account holders to not properly configure the protections that are already available to them. 

WhatsApp's native privacy settings can be an effective tool to prevent unauthorised access, curb data misuse, and reduce the risk of account takeover if they are properly enabled. There is an increased importance for this matter, especially because the platform is routinely used to exchange sensitive information, such as Aadhaar information and bank credentials, as well as one-time passwords, personal images, and official documents, on a daily basis.

In accordance with expert opinion, lax privacy configurations can put sensitive personal data at risk of fraud, identity theft, and social engineering attacks, while even a modest effort to review and tighten privacy controls can significantly improve one's digital security posture. It has come as a result of these broader privacy debates that the introduction of the Meta AI within WhatsApp has become a focus of concern for both users and privacy advocates. 

The AI chatbot, which can be accessed via a persistent blue icon on the Chats screen, will enable users to generate images and receive responses to prompts, but its continuous presence has sparked concerns over data handling, consent management, and user control over the chatbot. 

Despite WhatsApp's claims that only messages shared on the platform intentionally will be processed by the chatbot, many users are uneasy about the inability of the company to disable or remove Meta AI, especially since the company is unsure of the policies regarding data retention, training AI, and possible third-party access. 

Despite the company's caution against sharing sensitive personal information with the chatbot, users may still be able to use this data in order to refine the model as a whole, implicitly acknowledging the possibility of doing so. 

In light of this backdrop, WhatsApp has rolled out a feature aimed at protecting users from one another in lieu of addressing concerns associated with AI integration directly. It is designed to create an additional layer of confidentiality within selected conversations, and eliminates the use of Meta AI within those threads so that end-to-end encryption is maintained during user-to-user conversations. This framework reinforces the concept of end-to-end encryption at each level of the user-to-user conversation. 

As a result, many critics of this technology contend that while it is successful in safeguarding sensitive information comprehensively, it has limitations, such as allowing screenshots and manual saving of content. This limits its ability to provide comprehensive information protection.

The feature may temporarily reduce the anxiety surrounding Meta AI's involvement in private conversations, but experts claim it does little to resolve deeper concerns about transparency, consent, and control over the collection and use of data by AI systems.

In the future, WhatsApp will eventually need to address those concerns in a more direct manner in the course of rolling out additional updates. WhatsApp continues to serve as a primary channel for workplace communication, but security experts warn that convenience has quietly outpaced caution as it continues to consolidate its position.

Despite the fact that many professionals still use the default settings of their accounts, there are still risks associated with hijacking, impersonation, and data theft, which go far beyond the risks to your personal privacy, client privacy, and brand reputation.

There are several layers of security that are widely available, including two-step authentication, device management, biometric app locks, encrypted backups, and regular privacy checks, all of which remain underutilized despite their proven effectiveness at preventing common takeovers and phishing attempts. 

It must be noted that experts emphasize that technical controls alone are not sufficient to prevent cybercriminals from exploiting vulnerabilities. Human error remains one of the most exploited vulnerabilities, especially since attackers are increasingly using WhatsApp for social engineering scams, voice phishing, and impersonation of executives.

There has been an upward trend in the adoption of structured phishing simulation and awareness programs in recent years, which, according to industry data, can significantly reduce breach costs and employee susceptibility to attacks, as well as employees' privacy concerns. 

It is becoming increasingly important for organizations to take action to safeguard sensitive conversations in a climate where messaging apps have become both indispensable tools and high-value targets, through the disciplined application of WhatsApp's built-in protections and sustained investment in user training. 

The development of these developments, taken together, underscores the widening gap between WhatsApp's security capabilities and how it is used in reality. As the app continues to evolve into a hybrid space for personal communication, business coordination, and AI-assisted interactions, privacy and data protection concerns are growing as it develops into an increasingly hybrid platform. 

Various attack techniques have advanced over the years, but the combination of these techniques, the opaque integration of artificial intelligence, and the widespread reliance on default settings has resulted in an environment where users have become increasingly responsible for their own security. 

There has been some progress on WhatsApp's security, in terms of introducing meaningful safeguards, and it has also announced further updates, but their ultimate impact relies on informed adoption, transparent governance, and sustained scrutiny from regulators, as well as the security community. 

While clearer boundaries are being established around data use and user control, protecting conversations on one of the world's most popular messaging platforms will continue to be a technical challenge, but also a test of trust between users and the service they rely upon on a daily basis.

North Korean Hackers Abuse VS Code Projects in Contagious Interview Campaign to Deploy Backdoors

 

North Korea–linked threat actors behind the long-running Contagious Interview campaign have been seen leveraging weaponized Microsoft Visual Studio Code (VS Code) projects to trick victims into installing a backdoor on their systems.

According to Jamf Threat Labs, this activity reflects a steady refinement of a technique that first came to light in December 2025. The attackers continue to adapt their methods to blend seamlessly into legitimate developer workflows.

"This activity involved the deployment of a backdoor implant that provides remote code execution capabilities on the victim system," security researcher Thijs Xhaflaire said in a report shared with The Hacker News.

Initially revealed by OpenSourceMalware last month, the attack relies on social engineering job seekers. Targets are instructed to clone a repository hosted on platforms such as GitHub, GitLab, or Bitbucket and open it in VS Code as part of an alleged hiring assessment.

Once opened, the malicious repository abuses VS Code task configuration files to run harmful payloads hosted on Vercel infrastructure, with execution tailored to the victim’s operating system. By configuring tasks with the "runOn: folderOpen" option, the malware automatically runs whenever the project or any file within it is opened in VS Code. This process ultimately results in the deployment of BeaverTail and InvisibleFerret.

Later versions of the campaign have introduced more complex, multi-stage droppers concealed within task configuration files. These droppers masquerade as benign spell-check dictionaries, serving as a fallback if the malware cannot retrieve its payload from the Vercel-hosted domain.

As with earlier iterations, the obfuscated JavaScript embedded in these files executes immediately when the project is opened in the integrated development environment (IDE). It connects to a remote server ("ip-regions-check.vercel[.]app") and runs any JavaScript code sent back. The final payload stage consists of yet another heavily obfuscated JavaScript component.

Jamf also identified a newly observed infection method that had not been documented previously. While the initial lure remains the same—cloning and opening a malicious Git repository in VS Code—the execution path changes once the repository is trusted.

"When the project is opened, Visual Studio Code prompts the user to trust the repository author," Xhaflaire explained. "If that trust is granted, the application automatically processes the repository's tasks.json configuration file, which can result in embedded arbitrary commands being executed on the system."
"On macOS systems, this results in the execution of a background shell command that uses nohup bash -c in combination with curl -s to retrieve a JavaScript payload remotely and pipe it directly into the Node.js runtime. This allows execution to continue independently if the Visual Studio Code process is terminated, while suppressing all command output."

The JavaScript payload, delivered from Vercel, contains the core backdoor logic. It establishes persistence, gathers basic system information, and maintains communication with a command-and-control server to enable remote code execution and system profiling.

In at least one observed incident, Jamf noted additional JavaScript being executed approximately eight minutes after the initial compromise. This secondary payload beacons to the server every five seconds, executes further JavaScript instructions, and can delete traces of its activity upon command. Researchers suspect the code may have been generated with the help of artificial intelligence (AI), based on the language and inline comments found in the source.

Actors linked to the Democratic People's Republic of Korea (DPRK) are known to aggressively target software developers, especially those working in cryptocurrency, blockchain, and fintech environments. These individuals often possess elevated access to financial systems, wallets, and proprietary infrastructure.

By compromising developer accounts and machines, attackers could gain access to sensitive source code, internal platforms, intellectual property, and digital assets. The frequent tactical changes observed in this campaign suggest an effort to improve success rates and further the regime’s cyber espionage and revenue-generation objectives.

The disclosure coincides with findings from Red Asgard, which investigated a malicious repository abusing VS Code tasks to install a full-featured backdoor known as Tsunami (also called TsunamiKit), along with the XMRig cryptocurrency miner. Separately, Security Alliance reported on a similar attack where a victim was contacted on LinkedIn by actors posing as the CTO of a project named Meta2140. The attackers shared a Notion[.]so page containing a technical test and a Bitbucket link hosting the malicious code.

Notably, the attack framework includes multiple fallback mechanisms. These include installing a rogue npm package called "grayavatar" or executing JavaScript that downloads an advanced Node.js controller. This controller runs five modules designed to log keystrokes, capture screenshots, scan the home directory for sensitive data, replace clipboard wallet addresses, steal browser credentials, and maintain persistent communication with a remote server.

The malware further establishes a parallel Python environment using a stager script that supports data exfiltration, cryptocurrency mining via XMRig, keylogging, and the installation of AnyDesk for remote access. The Node.js and Python components are tracked as BeaverTail and InvisibleFerret, respectively.

Collectively, these observations show that the state-sponsored group is testing several delivery mechanisms simultaneously to maximize the chances of successful compromise.

"While monitoring, we've seen the malware that is being delivered change very quickly over a short amount of time," Jaron Bradley, director of Jamf Threat Labs, told The Hacker News. It's worth noting that the payload we observed for macOS was written purely in JavaScript and had many signs of being AI assisted. It's difficult to know exactly how quickly attackers are changing their workflows, but this particular threat actor has a reputation for adapting quickly."

To reduce exposure, developers are urged to remain cautious when handling third-party repositories—particularly those shared during hiring exercises—carefully review source code before opening it in VS Code, and limit npm installations to trusted, well-vetted packages.

"This activity highlights the continued evolution of DPRK-linked threat actors, who consistently adapt their tooling and delivery mechanisms to integrate with legitimate developer workflows," Jamf said. "The abuse of Visual Studio Code task configuration files and Node.js execution demonstrates how these techniques continue to evolve alongside commonly used development tools."

Geopolitical Conflict Is Increasing the Risk of Cyber Disruption




Cybersecurity is increasingly shaped by global politics. Armed conflicts, economic sanctions, trade restrictions, and competition over advanced technologies are pushing countries to use digital operations as tools of state power. Cyber activity allows governments to disrupt rivals quietly, without deploying traditional military force, making it an attractive option during periods of heightened tension.

This development has raised serious concerns about infrastructure safety. A large share of technology leaders fear that advanced cyber capabilities developed by governments could escalate into wider cyber conflict. If that happens, systems that support everyday life, such as electricity, water supply, and transport networks, are expected to face the greatest exposure.

Recent events have shown how damaging infrastructure failures can be. A widespread power outage across parts of the Iberian Peninsula was not caused by a cyber incident, but it demonstrated how quickly modern societies are affected when essential services fail. Similar disruptions caused deliberately through cyber means could have even more severe consequences.

There have also been rare public references to cyber tools being used during political or military operations. In one instance, U.S. leadership suggested that cyber capabilities were involved in disrupting electricity in Caracas during an operation targeting Venezuela’s leadership. Such actions raise concerns because disabling utilities affects civilians as much as strategic targets.

Across Europe, multiple incidents have reinforced these fears. Security agencies have reported attempts to interfere with energy infrastructure, including dams and national power grids. In one case, unauthorized control of a water facility allowed water to flow unchecked for several hours before detection. In another, a country narrowly avoided a major blackout after suspicious activity targeted its electricity network. Analysts often view these incidents against the backdrop of Europe’s political and military support for Ukraine, which has been followed by increased tension with Moscow and a rise in hybrid tactics, including cyber activity and disinformation.

Experts remain uncertain about the readiness of smart infrastructure to withstand complex cyber operations. Past attacks on power grids, particularly in Eastern Europe, are frequently cited as warnings. Those incidents showed how coordinated intrusions could interrupt electricity for millions of people within a short period.

Beyond physical systems, the information space has also become a battleground. Disinformation campaigns are evolving rapidly, with artificial intelligence enabling the fast creation of convincing false images and videos. During politically sensitive moments, misleading content can spread online within hours, shaping public perception before facts are confirmed.

Such tactics are used by states, political groups, and other actors to influence opinion, create confusion, and deepen social divisions. From Eastern Europe to East Asia, information manipulation has become a routine feature of modern conflict.

In Iran, ongoing protests have been accompanied by tighter control over internet access. Authorities have restricted connectivity and filtered traffic, limiting access to independent information. While official channels remain active, these measures create conditions where manipulated narratives can circulate more easily. Reports of satellite internet shutdowns were later contradicted by evidence that some services remained available.

Different countries engage in cyber activity in distinct ways. Russia is frequently associated with ransomware ecosystems, though direct state involvement is difficult to prove. Iran has used cyber operations alongside political pressure, targeting institutions and infrastructure. North Korea combines cyber espionage with financially motivated attacks, including cryptocurrency theft. China is most often linked to long-term intelligence gathering and access to sensitive data rather than immediate disruption.

As these threats manifest into serious matters of concern, cybersecurity is increasingly viewed as an issue of national control. Governments and organizations are reassessing reliance on foreign technology and cloud services due to legal, data protection, and supply chain concerns. This shift is already influencing infrastructure decisions and is expected to play a central role in security planning as global instability continues into 2026.

Google Gemini Calendar Flaw Allows Meeting Invites to Leak Private Data

 

Though built to make life easier, artificial intelligence helpers sometimes carry hidden risks. A recent study reveals that everyday features - such as scheduling meetings - can become pathways for privacy breaches. Instead of protecting data, certain functions may unknowingly expose it. Experts from Miggo Security identified a flaw in Google Gemini’s connection to Google Calendar. Their findings show how an ordinary invite might secretly gather private details. What looks innocent on the surface could serve another purpose beneath. 

A fresh look at Gemini shows it helps people by understanding everyday speech and pulling details from tools like calendars. Because the system responds to words instead of rigid programming rules, security experts from Miggo discovered a gap in its design. Using just text that seems normal, hackers might steer the AI off course. These insights, delivered openly to Hackread.com, reveal subtle risks hidden in seemingly harmless interactions. 

A single calendar entry is enough to trigger the exploit - no clicking, no downloads, no obvious red flags. Hidden inside what looks like normal event details sits coded directions meant for machines, not people. Rather than arriving through email attachments or shady websites, the payload comes disguised as routine scheduling data. The wording blends in visually, yet when processed by Gemini, it shifts into operational mode. Instructions buried in plain sight tell the system to act without signaling intent to the recipient. 

A single harmful invitation sits quietly once added to the calendar. Only after the user poses a routine inquiry - like asking about free time on Saturday - is anything set in motion. When Gemini checks the agenda, it reads the tainted event along with everything else. Within that entry lies a concealed instruction: gather sensitive calendar data and compile a report. Using built-in features of Google Calendar, the system generates a fresh event containing those extracted details. 

Without any sign, personal timing information ends up embedded within a new appointment. What makes the threat hard to spot is its invisible nature. Though responses appear normal, hidden processes run without alerting the person using the system. Instead of bugs in software, experts point to how artificial intelligence understands words as the real weak point. The concern grows as behavior - rather than broken code - becomes the source of danger. Not seeing anything wrong does not mean everything is fine. 

Back in December 2025, problems weren’t new for Google’s AI tools when it came to handling sneaky language tricks. A team at Noma Security found a gap called GeminiJack around that time. Hidden directions inside files and messages could trigger leaks of company secrets through the system. Experts pointed out flaws deep within how these smart tools interpret context across linked platforms. The design itself seemed to play a role in the vulnerability. Following the discovery by Miggo Security, Google fixed the reported flaw. 

Still, specialists note similar dangers remain possible. Most current protection systems look for suspicious code or URLs - rarely do they catch damaging word patterns hidden within regular messages. When AI helpers get built into daily software and given freedom to respond independently, some fear misuse may grow. Unexpected uses of helpful features could lead to serious consequences, researchers say.

Ingram Micro Reveals Impact of Ransomware Attack on Employee Records


 

Ingram Micro quietly divulged all the personal details of their employees and job applicants last summer after a ransomware attack at the height of the summer turned into a far-reaching data exposure, exposing sensitive information about their employees and job applicants and illustrating the growing threat of cybercrime. 

A significant breach at one of the world's most influential technology supply-chain providers has been revealed in the July 2025 attack, in which the company confirms that records linked to more than 42,000 people were compromised, marking the most significant breach of the company's history. It is evident that in the wake of the disruptions caused by older, high-profile cybercriminals, emerging ransomware groups are swiftly targeting even the most established businesses. 

These groups are capitalizing on disrupting these older, high-profile cyber criminal operations by swiftly attacking even the most established businesses. It is a stark reminder to manufacturers, distributors, and mid-market companies that depend on Ingram Micro for global logistics, cloud platforms, and managed services to stay protected from cybersecurity risks, and the breach serves as a warning that cybersecurity risk does not end within an organization's boundaries, as third-party cyber-incidents are becoming increasingly serious and problematic. 

The largest distributor of business-to-business technology, Ingram Micro, operates on a global scale. The company employs more than 23,500 associates, serves more than 161,000 customers, and reported net sales of $48 billion in 2024, which was much greater than the previous year's gross sales of $6 billion. 

As stated in the notification letters to the Maine Attorney General and distributed to affected individuals, the attackers obtained documents containing extensive information, including Social Security numbers, that they had stolen. 

There was a security incident involving the company on July 3rd, 2025, and, in its disclosure, the company indicated that an internal investigation was immediately launched, which determined that an unauthorized third party had access to and removed files from internal repositories between July 2 and July 3rd, 2025. 

In addition to the information contained in the compromised records, there were also information regarding current and former employees and potential job applicants, including names, contact details, birthdates, and government-issued identification numbers such as Social Security numbers, driver's license numbers, and passport numbers, as well as employment records in certain cases. 

A major attack on Ingram Micro's infrastructure may also have caused widespread disruptions to internal operations, as well as taking the company's website offline for a period of time, forcing the company to instruct its employees to work remotely as remediation efforts were underway. 

In spite of the fact that the company does not claim the breach was the result of a particular threat actor, it confirms that ransomware was deployed during the incident, in line with earlier reports linking the incident with the SafePay ransomware group, which later claimed responsibility and claimed to have stolen about 3.5 terabytes of data, and then published the name of the company on its dark web leaks.

In addition to drawing renewed attention to the systemic threat posed by attacks on central technology distributors, the incident also shed light on the risk that a single compromise can have a ripple effect across the entire digital supply chain as well. 

Analysts who examined the Ingram Micro intrusion claim that the ransomware was designed to be sophisticated, modular, and was modeled after modern malware campaigns that are operated by operators. The malicious code unfolded in carefully sequenced stages, with the lightweight loader establishing persistence and neutralizing baseline security controls before the primary payload was delivered.

The attackers subsequently developed components that enabled them to move laterally through internal networks by exploiting cached authentication data and directory services in order to gain access to additional privileges and harvest credentials. The attackers also employed components designed to escalate privileges and harvest credentials. 

The spread across accessible systems was then automated using a dedicated propagation engine, while at the same time manual intervention was still allowed to prioritize high-value targets using a dedicated propagation engine. As part of the attack, the encryption engine used a combination of industry-grade symmetric cryptography and asymmetric key protection to secure critical data, effectively locking that data beyond recovery without the cooperation of the attackers. 

As an extension of the encryption process, a parallel exfiltration process used encrypted web traffic to evade detection to quietly transfer sensitive files to external command-and-control infrastructure. Ultimately, ransom notes were released in order to exert pressure through both operational disruptions as well as the threat of public data exposure, which culminated in the deployment of ransom notes. 

The combination of these elements illustrates exactly how contemporary ransomware has evolved into a hybrid threat model-a model that combines automation, stealth, and human oversight-and why breaches at key nodes within the technology ecosystem can have a far-reaching impact well beyond the implications of one organization. 

When Ingram Micro discovered that its data had been compromised, the company took a variety of standard incident response measures to address it, including launching a forensic investigation with the help of an external cybersecurity firm, notifying law enforcement and relevant regulators, and notifying those individuals whose personal information may have been compromised. 

Additionally, the company offered two years of free credit monitoring and identity theft protection to all customers for two years. It has been unclear who the attackers are, but the SafePay ransomware group later claimed responsibility, alleging in its dark web leak site that the group had stolen 3.5 terabytes of sensitive data. Those claims, however, are not independently verified, nor is there any information as to what ransom demands have been made.

The attack has the hallmarks of a modern ransomware-as-a-service attack, with a custom malware being deployed through a well-established framework that streamlines intrusion, privilege escalation, lateral movement, data exfiltration, and data encryption while streamlining intrusion, privilege escalation, lateral movement, and data encryption techniques.

As such, these campaigns usually take advantage of compromised credentials, phishing schemes, and unpatched vulnerabilities to gain access to the victim. They then combine double-extortion tactics—locking down systems while siphoning sensitive data—with the goal of putting maximum pressure on them. 

During the event, Ingram Micro's own networks were disrupted, which caused delays across global supply chains that depended on Ingram Micro's platforms, causing disruptions as well as disruptions to transactions. There is an opportunity for customers, partners, and the wider IT industry to gain a better understanding of the risks associated with concentration of risk in critical vendors as well as the potentially catastrophic consequences of a relatively small breach at a central node.

A number of immediate actions were taken by Ingram Micro in the aftermath of the attack, including implementing the necessary measures to contain the threat, taking all affected systems offline to prevent further spread of the attack, and engaging external cybersecurity specialists as well as law enforcement to support the investigation and remediation process. 

As quickly as possible, the company restored access to critical platforms, gradually restoring core services, and maintained ongoing forensic analysis throughout the day to assess the full extent of the intrusion, as well as to assure its customers and partners that the company was stable. It is not only the operational response that has been triggered by the incident, but the industry has largely reflected on the lessons learned from a similar attack. 

It is apparent that security experts are advocating resilience-driven strategies such as zero trust access models, network microsegmentation, immutable backup architectures, and continuous threat monitoring in order to limit breaches' blast radius. 

It is also evident from the episode that the technology industry is becoming increasingly dependent on third-party providers, which is why it has reinforced the importance of regular incident response simulations and robust vendor risk management strategies. This ransomware attack from Ingram Micro illustrates the importance of modern cyber operations beyond encrypting data. 

It also illustrates how modern cyber operations are also designed to disrupt interconnected ecosystems, in addition to exerting pressure through theft of data and a systemic impact. As a result of this incident, it was once again reinforced that enterprise security requires preparation, layers of defenses, and supply chain awareness. 

A response of Ingram Micro was to isolate the affected servers and segments of the network in order to contain the intrusion. During this time, the Security Operations Center activated a team within its organization to coordinate remediation and forensic analysis as part of its response. This action corresponds with established incident handling standards, which include the NIST Cybersecurity Framework and ISO 27035 guidelines. 

Currently, investigators are conducting forensic examinations of the ransomware strain, tracking the initial access vectors, and determining whether data has been exfiltrating in order to determine if it was malicious or not. Federal agencies including the FBI Internet Crime Complaint Center and the Cybersecurity and Infrastructure Security Agency have been informed about the investigation. 

In the recovery process, critical systems are restored from verified backups, compromised infrastructure is rebuilt, and before the environment can be returned to production, it is verified that a restored environment does not contain any malicious artifacts.

It is no surprise to security specialists that incidents of this scale are increasingly causing large companies to reevaluate their core controls, such as identity and access management, which includes stronger authentication, tighter access governance, and continuous monitoring.

It is believed that these actions will decrease the risk of unauthorized access and limit the impact of future breaches to a great extent. This Ingram Micro incident is an excellent example of how ransomware has evolved into a technical and systemic threat as well, one that increasingly targets the connective tissue of the global technology economy, rather than isolated enterprises, to increasingly target. 

A breach like the one in question has demonstrated the way that attacks on highly integrated distributors can cascade across industries, exposing information, disrupting operations, and amplifying risks that extend far beyond the initial point of compromise. It is likely that the episode will serve as a benchmark for regulators, enterprises, and security leaders to evaluate resilience within complex supply chains as investigations continue and recovery efforts mature. 

During a period of time when the industry relies heavily on scale, speed, and trust, the attack serves as a strong warning that cybersecurity readiness cannot be judged solely by its internal defenses, but also by its ability to anticipate, absorb, and recover from shocks originating anywhere within the interconnected digital ecosystem as well as to measure its readiness for cybersecurity.

Resecurity Breach Claims Exposed as Honeypot Deception

 

The hackers, who claimed to represent the “Scattered Lapsus$ Hunters” (SLH) group, believed they successfully compromised Resecurity, a cybersecurity firm based in the United States, by exfiltrating their data. Resecurity disputed this by saying they were only able to gain access to their honeypot, which was set up to provide fake data to potential attackers. Such differing accounts of an incident show not only the brazenness of financially driven attackers but also the increasing use of deception techniques by attackers to gain intelligence.

The SLH members propagated their allegations through Telegram, claiming “full access” to the Resecurity systems and the theft of all internal conversations and logs, employee data, threat intelligence reports, and an extensive list of clients and their information. In an attempt to prove the validity of these allegations, the SLH members shared screenshots of Resecurity’s internal “Mattermost” environment, where conversations between the company employees and Pastebin representatives about malicious data on the Pastebin platform were shown. The SLH members described the attack as retaliation against Resecurity, which they believed was trying to socially engineer them by impersonating the buyers of the stolen Vietnamese financial database in order to receive complimentary samples and more information about their activities. 

Adding to this complexity, the renowned threat actor group known as ShinyHunters, known to have been part of the Scattered Lapsus$ Hunters umbrella, later disclaimed their involvement in this incident. This was revealed when a representative of ShinyHunters told a local media outlet that, although they have long claimed to be part of SLH, they did not have any involvement in this incident against Resecurity. This has left many questions regarding how these overlapping groups coordinate their efforts or if SLH uses its association with ShinyHunters to magnify its efforts. 

Resecurity firmly disputes any compromise of its production environment, asserting that the attackers never touched live systems or genuine client data but instead interacted with a purpose-built honeypot. According to a report filed on December 24, it was determined that the initial recon in the vulnerable environment was first spotted on November 21, 2025, with subsequent scanning activities originating from Egyptian IP addresses and utilizing Mullvad VPN. In this regard, in order to monitor the tactics, techniques, and procedures of the attacker, the Digital Forensics and Incident Response (DFIR) team set up an isolated “honeypot” account. 

To make the bait more convincing, Resecurity claims the creation of more than 28,000 fake consumer records and over 190,000 fake payment transactions modeled after the official API structures defined by Stripe. Later in December, the attacker reportedly began automated data exfiltration attacks with more than 188,000 requests made between December 12th and December 24th using a wide range of residential proxy IP addresses. During this period, Resecurity claims that sporadic proxy issues temporarily revealed actual IP addresses, helping analysts identify the attacker’s back-end servers, whose details were later shared with a foreign law enforcement agency that subsequently issued a subpoena against the attacker.

After the initial coverage, the attackers contacted Dissent Doe of DataBreaches.net and provided samples of what they claimed was stolen data, seeking to reinforce their narrative. However, an independent review by DataBreaches concluded there was no evidence that SLH obtained information from any real Resecurity clients, aligning with the company’s assertion that only synthetic records were exposed. Meanwhile, the Telegram channel that originally hosted SLH’s breach claims has since been suspended for violating the platform’s policies, limiting the group’s ability to continue publishing its version of events.

StealC Malware Operators Exposed After XSS Bug Leaks Session and Hardware Data

 

A cross-site scripting (XSS) vulnerability in the web-based management panel used by StealC information-stealing malware operators enabled security researchers to monitor live activity and collect intelligence about the attackers’ systems.

First appearing in early 2023, StealC quickly gained traction on dark web forums due to aggressive promotion and its ability to evade detection while harvesting large volumes of sensitive data. Over time, the malware continued to evolve, with its developer rolling out several upgrades to expand functionality and appeal among cybercriminals.

A major update arrived in April last year with the launch of StealC version 2.0. This release introduced Telegram bot integration for real-time notifications, along with a revamped builder capable of creating customized malware samples based on templates and tailored data-exfiltration rules. Around the same period, the source code for StealC’s administrative panel was leaked, allowing researchers deeper insight into its internal workings.

CyberArk analysts later identified an XSS flaw within the panel that proved particularly revealing. By abusing this weakness, the team was able to gather browser and hardware fingerprints of StealC operators, monitor ongoing sessions, extract session cookies, and remotely take over active panel logins.

“By exploiting the vulnerability, we were able to identify characteristics of the threat actor’s computers, including general location indicators and computer hardware details,” the researchers say.

“Additionally, we were able to retrieve active session cookies, which allowed us to gain control of sessions from our own machines.”

To avoid tipping off attackers and enabling a rapid fix, CyberArk chose not to publish technical specifics about the XSS issue.

The research also details a StealC user tracked as ‘YouTubeTA’, who reportedly took over dormant but legitimate YouTube channels—likely through stolen credentials—and used them to distribute malicious links. Throughout 2025, this actor conducted sustained malware campaigns, amassing more than 5,000 victim logs, roughly 390,000 passwords, and around 30 million cookies, the majority of which were non-sensitive.

Screenshots from the attacker’s control panel suggest that infections largely occurred when victims searched online for pirated versions of Adobe Photoshop and Adobe After Effects. Exploiting the XSS flaw further allowed researchers to profile the attacker’s setup, revealing the use of an Apple M3-based machine configured with English and Russian language settings, operating in an Eastern European time zone, and connecting from Ukraine.

The individual’s real location was exposed after they accessed the StealC panel without a VPN, revealing an IP address tied to Ukrainian internet provider TRK Cable TV.

CyberArk emphasized that while malware-as-a-service (MaaS) platforms allow threat actors to scale operations quickly, they also introduce significant risks by increasing the chances of operational exposure.

BleepingComputer reached out to CyberArk to understand the timing behind the disclosure. Researcher Ari Novick explained that the decision was driven by a recent surge in StealC activity, possibly linked to upheaval surrounding the Lumma malware ecosystem.

"By posting the existence of the XSS we hope to cause at least some disruption in the use of the StealC malware, as operators re-evaluate using it. Since there are now relatively many operators, it seemed like a prime opportunity to potentially cause a fairly significant disruption in the MaaS market."