


Ledger Customer Data Exposed After Global-e Payment Processor Cloud Incident

 

A fresh leak of customer details has emerged, tied not to Ledger's own systems but to Global-e, an outside firm that handles payments for Ledger.com. News broke when affected users received an alert email from Global-e; that message later appeared online when ZachXBT, a well-known pseudonymous blockchain investigator, posted it on the platform X.

The breach exposed a set of Ledger customer records hosted in Global-e's cloud storage. According to one report, the compromised data consisted of personal details, including names and email addresses. How many people were affected remains unclear, and Global-e has not shared specifics about when the intrusion took place.

Global-e says it detected unusual activity, took immediate steps to secure its systems, and opened an investigation that confirmed unauthorized access had occurred. Outside forensic experts were brought in to examine how the breach unfolded and assess what data was exposed. Their findings showed that certain personal details, names and contact records among them, were viewed without permission; the information reached was limited but sensitive.

Ledger confirmed details of the incident in a statement provided to CoinDesk, stressing that the issue originated not in Ledger's infrastructure but inside Global-e's environment. Because Global-e acts as the Merchant of Record for certain transactions, it is responsible for managing the related personal data, which is why Global-e notified impacted individuals directly. The exposed information covers records tied to purchases made on Ledger.com when buyers paid through Global-e's system.

Although limited to specific order-related fields, the access was unauthorized and stemmed from weaknesses on Global-e's side. The two companies are separate entities, but their checkout integration links how transactional information flows between them, and the affected customers placed orders during a defined window under that arrangement. After discovery, security updates were coordinated across both organizations, and notifications were timed to follow the completion of the third-party forensic review so that nothing was disclosed before the analysis was complete.

Still, Ledger pointed out that its own infrastructure, including its platform, hardware, and software, was untouched by the incident and that security around those systems remains intact. Because users retain direct control of their wallets, third parties such as Global-e have never had access to seed phrases, private keys, or asset details. Payment records, the company added, were not part of what appeared in the leak.

Few details emerged at first, but Ledger confirmed it is working alongside Global-e to give affected customers clear information. Because Global-e's payment setup is used by several retailers, the weakness points beyond a single company, and the impact across that shared infrastructure may extend wider than initially expected.

This revelation follows earlier security problems connected to Ledger. In 2020, a flaw at Shopify, the e-commerce platform Ledger used at the time, led to a leak affecting 270,000 customers' details. In 2023, another incident caused financial damage close to half a million dollars and touched multiple DeFi platforms. Though different in both scale and source, the newest issue highlights how reliance on outside vendors can still pose serious risks when handling purchases and private user information.

Ledger's own platforms show no signs of a breach on their end, and nothing points to internal failures. Even so, and despite the absence of further official posts, the guidance for customers remains one of caution and vigilance.

ESA Confirms Cyber Breach After Hacker Claims 200GB Data Theft

 

The European Space Agency (ESA) has confirmed a major cybersecurity incident affecting external servers used for scientific cooperation. The hackers behind the operation claimed responsibility in a post on the hacking forum BreachForums, stating that over 200 GB of data had been stolen, including source code, API tokens, and credentials. The incident highlights escalating cyber threats to space infrastructure as the sector grows more interconnected.

The incident allegedly occurred around December 18, 2025, with an actor using the pseudonym "888" said to have had access to ESA's JIRA and Bitbucket systems for roughly a week. ESA says the compromised systems were a "very small number" of machines outside its main network that held only unclassified data intended for engineering partnerships. The agency investigated, secured the affected systems, and notified stakeholders, stating that no mission-critical systems were compromised.

The leaked material reportedly includes CI/CD pipelines, Terraform files, SQL files, configurations, and hardcoded credentials, which has raised supply-chain security concerns. It also includes screenshots from the breach that appear to show unauthorized access to private repositories, though it remains unverified whether the data is genuine and whether any of it is classified. Security experts believe that even unclassified data of this kind could be used by sophisticated attackers for lateral movement.
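The supply-chain worry here comes down to secrets sitting in version-controlled files. As a rough illustration only, unrelated to ESA's actual tooling, the sketch below shows the kind of pattern-based scan defenders often run across Terraform and configuration files before (or after) a leak; the regexes and file extensions are assumptions chosen for demonstration, and purpose-built secret scanners use far larger rule sets.

```python
# Minimal illustrative sketch: flag likely hardcoded secrets in a repo checkout.
# The patterns and file types below are assumptions for demonstration only.
import re
from pathlib import Path

SECRET_PATTERNS = [
    re.compile(r'(?i)(api[_-]?key|secret|token|password)\s*[=:]\s*["\'][^"\']{8,}["\']'),
    re.compile(r'AKIA[0-9A-Z]{16}'),          # AWS access key ID format
    re.compile(r'-----BEGIN (RSA|EC|OPENSSH) PRIVATE KEY-----'),
]
SCAN_SUFFIXES = {".tf", ".tfvars", ".yml", ".yaml", ".json", ".env", ".sql", ".cfg"}

def scan_repo(root: str):
    """Walk a checkout and report lines that match any secret-looking pattern."""
    findings = []
    for path in Path(root).rglob("*"):
        if path.suffix not in SCAN_SUFFIXES or not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), start=1):
            if any(p.search(line) for p in SECRET_PATTERNS):
                findings.append((str(path), lineno, line.strip()[:80]))
    return findings

if __name__ == "__main__":
    for path, lineno, snippet in scan_repo("."):
        print(f"{path}:{lineno}: possible hardcoded secret -> {snippet}")
```

A scan like this is noisy by design; the point is simply that credentials committed to repositories are easy to harvest once an attacker has read access, which is why their presence in the leaked material is what worries analysts most.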

Adding to the trouble, the Lapsus$ group said it carried out a separate breach in September 2025, claiming to have exfiltrated 500 GB of data containing sensitive files on spacecraft operations, mission specifics, and contractor information involving partners such as SpaceX and Airbus. ESA opened a criminal investigation with the authorities, though the immediate effects were contained. The agency has been hit by a string of incidents since 2011, including card-skimming code placed on its merchandise store.

The series of breaches may reflect the loosely coupled nature of regional space cooperation across ESA's 23 member states. Space cybersecurity requirements are rising, as evidenced by open solicitations for security products, and incidents like this may foster distrust of global partnerships. Investigations into the long-term threats continue, but the pressing need for stronger protection is already clear.

Targeted Cyberattack Foiled by Resecurity Honeypot


 

Cybersecurity firm Resecurity has revealed details of a targeted intrusion attempt against its internal environment in November 2025. To expose the adversaries behind the attack, the company deliberately turned it into a counterintelligence operation using advanced deception techniques.

When a threat actor used a low-privilege employee account to try to gain access to the enterprise network, Resecurity's incident response team redirected the intrusion into a controlled honeypot: an isolated environment populated with synthetic data and built to resemble a realistic enterprise network, where the activity could be observed safely.

The move allowed real-time analysis of the attackers' infrastructure and tradecraft, and it brought in law enforcement after multiple pieces of evidence linked the activity to an Egypt-based threat actor and to infrastructure associated with the ShinyHunters cybercrime group, which was later shown to have falsely claimed responsibility for a data breach.

Resecurity demonstrated how modern deception platforms, with the help of synthetic datasets generated by artificial intelligence, combined with carefully curated artifacts gathered from previously leaked dark web material, can transform reconnaissance attempts by financially motivated cybercriminals into actionable intelligence.

Such active defense strategies are becoming increasingly important in modern cybersecurity operations, not least because they can be run without exposing customer or proprietary data.

The Resecurity team reported that threat actors operating under the nickname "Scattered Lapsus$ Hunters" publicly claimed on Telegram to have accessed the company's systems and stolen sensitive material, including employee details, internal communications, threat intelligence reports, and client data. The firm has strongly denied the claim.

The screenshots shared by the group were later confirmed to have come from a honeypot environment Resecurity had built specifically for this purpose, not from the company's production infrastructure.

On 21 November 2025, the company's digital forensics and incident response team observed suspicious probes of publicly available services, as well as targeted attempts to access a restricted employee account.

Initial traces of the reconnaissance traffic pointed to Egyptian IP addresses, including 156.193.212.244 and 102.41.112.148, alongside the use of commercial VPN services. Rather than blocking the intrusion outright, Resecurity shifted from containment to observation.

Defenders created a carefully staged honeytrap account filled with synthetic data so they could observe the attackers' tactics, techniques, and procedures.

The decoy environment was seeded with 28,000 fake consumer profiles and nearly 190,000 mock payment transactions generated from publicly available patterns, including fake Stripe-style records and fake email addresses derived from credential "combo lists."

To make the environment look even more authentic, Resecurity reactivated a retired Mattermost collaboration platform and seeded it with outdated 2023 logs, convincing the attackers that the system was genuine.

Between December 12 and December 24, the attackers routed approximately 188,000 automated requests through residential proxy networks in an attempt to harvest the synthetic dataset. The effort ultimately backfired: repeated connection failures exposed operational security shortcomings and revealed some of the attackers' real infrastructure.

A recent press release issued by Resecurity denies the breach allegation, stating that the systems cited by the threat actors were never part of its production environment, but were rather deliberately exposed honeypot assets designed to attract and observe malicious activity from a distance.

According to a report published on December 24 and shared with reporters after the company received external inquiries, its digital forensics and incident response teams first detected the reconnaissance activity on November 21, 2025, a day after the threat actor began probing publicly accessible services on November 20, 2025.

Telemetry gathered early in the investigation revealed several indicators of the intrusion attempt, including connections from Egyptian IP addresses and traffic routed through Mullvad VPN infrastructure.

Rather than moving immediately to containment, Resecurity deployed a controlled honeypot account inside an isolated environment. The attacker was thus able to authenticate to and interact with systems populated entirely with false employee, customer, and payment information while Resecurity closely monitored their actions.

Specifically, the synthetic datasets were designed to replicate real enterprise data structures, including over 190,000 fictitious consumer profiles and over 28,000 dummy payment transactions formatted to follow Stripe's official API specifications.
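The report does not describe Resecurity's data-generation tooling, but the general idea of decoy records that follow the shape of Stripe objects can be sketched in a few lines. The field names below mirror Stripe's public PaymentIntent structure (id, amount, currency, status, created); everything else, including the helper names and name/domain lists, is a hypothetical illustration rather than the company's actual method.

```python
# Hypothetical sketch: generate decoy payment records shaped like Stripe
# PaymentIntent objects. Values are random and meaningless by design; this
# illustrates the honeypot-seeding idea, not Resecurity's actual tooling.
import json
import random
import string
import time

FIRST_NAMES = ["alex", "sam", "jordan", "casey", "riley"]
DOMAINS = ["example.com", "mail.test", "inbox.invalid"]

def fake_id(prefix: str, length: int = 24) -> str:
    body = "".join(random.choices(string.ascii_letters + string.digits, k=length))
    return f"{prefix}_{body}"

def fake_payment_intent() -> dict:
    return {
        "id": fake_id("pi"),
        "object": "payment_intent",
        "amount": random.randint(500, 250_000),   # amount in minor units (cents)
        "currency": random.choice(["usd", "eur", "gbp"]),
        "status": random.choice(["succeeded", "requires_payment_method", "canceled"]),
        "created": int(time.time()) - random.randint(0, 365 * 24 * 3600),
        "receipt_email": f"{random.choice(FIRST_NAMES)}{random.randint(1, 9999)}@{random.choice(DOMAINS)}",
    }

if __name__ == "__main__":
    # Write a small batch of decoy transactions; a real seeding run would scale this up.
    records = [fake_payment_intent() for _ in range(1000)]
    with open("decoy_payments.json", "w") as fh:
        json.dump(records, fh, indent=2)
    print(f"wrote {len(records)} synthetic records")
```

The point of matching a familiar schema is simply plausibility: an attacker skimming exfiltrated JSON is less likely to suspect a trap when the records look like the payment data they expected to find.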

Between December 12 and December 24, the attacker made extensive use of residential proxy networks in an automated exfiltration operation that generated more than 188,000 requests against the decoy data.

During this period, Resecurity collected detailed telemetry on the adversary's tactics, techniques, and supporting infrastructure. Several operational security failures on the attacker's side, caused by proxy disruptions, briefly exposed confirmed IP addresses.
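The attribution step described here essentially comes down to watching for moments when the proxy layer slips. As a hedged sketch, assuming a simple CSV access-log format and a hypothetical list of known residential-proxy ranges (neither of which comes from the report), the analysis could look something like this; real investigations would rely on much richer telemetry and commercial proxy-intelligence feeds.

```python
# Illustrative sketch (assumed log format): flag source IPs that fall outside
# known residential-proxy ranges, i.e. candidate "real" attacker infrastructure
# exposed when proxy connections fail. CIDR ranges and CSV columns are assumptions.
import csv
import ipaddress
from collections import Counter

# Hypothetical CIDR blocks of the residential proxy provider seen in the traffic.
KNOWN_PROXY_RANGES = [ipaddress.ip_network(c) for c in ("203.0.113.0/24", "198.51.100.0/24")]

def is_proxy(ip: str) -> bool:
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in KNOWN_PROXY_RANGES)

def candidate_origin_ips(log_path: str) -> Counter:
    """Count requests from IPs outside the known proxy ranges."""
    hits = Counter()
    with open(log_path, newline="") as fh:
        # Expected columns (assumed): timestamp, src_ip, path, status
        for row in csv.DictReader(fh):
            if not is_proxy(row["src_ip"]):
                hits[row["src_ip"]] += 1
    return hits

if __name__ == "__main__":
    for ip, count in candidate_origin_ips("honeypot_access.csv").most_common(10):
        print(f"{ip}: {count} direct requests (possible proxy failure / real origin)")
```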

As the deception continued, investigators introduced additional synthetic datasets, prompting further mistakes that narrowed the attribution and helped identify the servers orchestrating the activity.

After the intelligence was shared with law enforcement partners, a foreign agency collaborating with Resecurity followed up with a subpoena request.

The attackers continued to make claims on Telegram and shared their data with third-party breach analysts, but those statements, like the earlier ones, lacked any verifiable evidence of a compromise of real client systems. Independent review likewise found no evidence that a breach had occurred.

Further examination showed that the Telegram channel used to distribute the claims had been suspended, and that follow-on assertions from the ShinyHunters group were likewise derived from the honeytrap environment.

The actors unknowingly gained access only to a decoy account and infrastructure, which was enough to confirm they had fallen into the honeytrap. The incident demonstrates both the growing sophistication of modern deception technology and the importance of embedding it within a broader, more resilient security framework to maximize its effectiveness.

A honeypot and synthetic data environment can be a valuable tool for observing attacker behavior. However, security leaders emphasize that the most effective way to use these tools is to combine them with strong foundational controls, including continuous vulnerability management, zero trust access models, multifactor authentication, employee awareness training, and disciplined network segmentation. 

Resecurity's approach represents an evolution in defensive strategy, from a purely reactive model to one in which organizations proactively fight cyberthreats by gathering intelligence, disrupting adversaries' operations, and reducing real-world risk in the process.

As cyber threats continue to evolve at a remarkable pace, the ability to observe, mislead, and anticipate hostile activity before meaningful damage occurs is becoming an increasingly important element of enterprise defense.

Together, the episodes offer a rare, transparent view of how modern cyberattacks unfold, and how they can be strategically neutralized before they escalate into risk to real data and systems.

Ultimately, the claims made against Resecurity serve less as evidence that a successful breach occurred than as an illustration of how threat actors increasingly rely on perception, publicity, and speed to shape a narrative before the facts are established.

Defenders should take the lesson to heart: visibility and control can play a key role in preventing a crisis. It has become increasingly important for organizations to verify, contextualize, and counter adversaries' false claims, as attackers combine technical capabilities with psychological tactics in their attempts to breach systems.

The Resecurity incident exemplifies how disciplined preparation and intelligence-led defense can turn an attempted compromise into a strategic advantage in an environment where trust and reputation are often the first targets, and how this can be done quietly, methodically, and without revealing what really matters.

What Happens When Spyware Hits a Phone and How to Stay Safe

 



Although advanced spyware attacks do not affect most smartphone users, cybersecurity researchers stress that awareness is essential as these tools continue to spread globally. Even individuals who are not public figures are advised to remain cautious.

In December, hundreds of iPhone and Android users received official threat alerts stating that their devices had been targeted by spyware. Shortly after these notifications, Apple and Google released security patches addressing vulnerabilities that experts believe were exploited to install the malware on a small number of phones.

Spyware poses an extreme risk because it allows attackers to monitor nearly every activity on a smartphone. This includes access to calls, messages, keystrokes, screenshots, notifications, and even encrypted platforms such as WhatsApp and Signal. Despite its intrusive capabilities, spyware is usually deployed in targeted operations against journalists, political figures, activists, and business leaders in sensitive industries.

High-profile cases have demonstrated the seriousness of these attacks. Former Amazon chief executive Jeff Bezos and Hanan Elatr, the wife of murdered Saudi dissident Jamal Khashoggi, were both compromised through Pegasus spyware developed by the NSO Group. These incidents illustrate how personal data can be accessed without user awareness.

Spyware activity remains concentrated within these circles, but researchers suggest its reach may be expanding. In early December, Google issued threat notifications and disclosed findings showing that an exploit chain had been used to silently install Predator spyware. Around the same time, the U.S. Cybersecurity and Infrastructure Security Agency warned that attackers were actively exploiting mobile messaging applications using commercial surveillance tools.

One of the most dangerous techniques involved is known as a zero-click attack. In such cases, a device can be infected without the user clicking a link, opening a message, or downloading a file. According to Malwarebytes researcher Pieter Arntz, once infected, attackers can read messages, track keystrokes, capture screenshots, monitor notifications, and access banking applications. Rocky Cole of iVerify adds that spyware can also extract emails and texts, steal credentials, send messages, and access cloud accounts.

Spyware may also spread through malicious links, fake applications, infected images, browser vulnerabilities, or harmful browser extensions. Recorded Future’s Richard LaTulip notes that recent research into malicious extensions shows how tools that appear harmless can function as surveillance mechanisms. These methods, often associated with nation-state actors, are designed to remain hidden and persistent.

Governments and spyware vendors frequently claim such tools are used only for law enforcement or national security. However, Amnesty International researcher Rebecca White says journalists, activists, and others worldwide have been unlawfully targeted, with spyware used as a method of repression. Thai activist Niraphorn Onnkhaow was targeted multiple times during pro-democracy protests in 2020 and 2021, and eventually withdrew from activism out of fear her data could be misused.

Detecting spyware is challenging. Devices may show subtle signs such as overheating, performance issues, or unexpected camera or microphone activation. Official threat alerts from Apple, Google, or Meta should be treated seriously. Leaked private information can also indicate compromise.

To reduce risk, Apple offers Lockdown Mode, which limits certain functions to reduce attack surfaces. Apple security executive Ivan Krstić states that widespread iPhone malware has not been observed outside mercenary spyware campaigns. Apple has also introduced Memory Integrity Enforcement, an always-on protection designed to block memory-based exploits.

Google provides Advanced Protection for Android, enhanced in Android 16 with intrusion logging, USB safeguards, and network restrictions.

Experts recommend avoiding unknown links, limiting app installations, keeping devices updated, avoiding sideloading, and restarting phones periodically. However, confirmed infections often require replacing the device entirely. Organizations such as Amnesty International, Access Now, and Reporters Without Borders offer assistance to individuals who believe they have been targeted.

Security specialists advise staying cautious without allowing fear to disrupt normal device use.

Grok AI Faces Global Backlash Over Nonconsensual Image Manipulation on X

 

A dispute over X's built-in AI assistant, Grok, is gaining attention, raising questions about consent, online safety, and how synthetic media tools can be abused. The tension surfaced when Julie Yukari, a 31-year-old musician living in Rio de Janeiro, posted a picture of herself relaxing with her cat during New Year's Eve celebrations. Shortly afterward, users on the platform began instructing Grok to modify the photograph, digitally swapping her outfit for skimpy beach attire.

Skepticism soon gave way to shock. Yukari had assumed the system would refuse such requests, yet it did not: altered images showing her in minimal clothing surfaced and spread quickly across the app. She described the episode as painful, one that showed how easily consent can be stripped away by AI tools operating inside familiar online spaces.

A Reuters investigation found that Yukari's case is not an isolated one: the agency uncovered multiple examples of Grok producing suggestive images of real people, some of whom appeared to be underage. X did not respond to inquiries about the findings. Earlier, xAI, the team developing Grok, had been quick to dismiss similar claims, calling traditional media outlets sources of misinformation.

Unease over sexually explicit AI-generated imagery is growing worldwide. Officials in France have referred complaints about X to prosecutors, calling such content unlawful and deeply degrading to women. India's technology ministry has likewise warned X that it failed to stop indecent material from being created or shared on the platform. U.S. agencies such as the FCC and FTC, meanwhile, have declined to comment publicly.

Reuters' review found a sharp rise in requests for Grok to alter pictures to show suggestive clothing. In the space of just ten minutes, more than 100 such instances appeared, mostly targeting young women. The system often produced explicit imagery without hesitation; at other times it carried out only part of the request. A large share of the posts quickly disappeared from public view, limiting how much could be measured afterward.

AI-driven image-editing tools capable of stripping clothes off photos have existed for some time, but they were mostly confined to obscure websites or required payment. Because Grok is built directly into a major social network, creating such fakes now takes almost no effort. X had been warned earlier about launching these kinds of features without tight controls.

Researchers and advocacy groups argue the situation is a predictable consequence of those ignored warnings. Legal specialists say the episode highlights deep flaws in how platforms handle harmful content and govern artificial intelligence. Rather than addressing the risks early, observers note, X failed to filter abusive prompts during model development and lacked strong safeguards against nonconsensual image creation.

For people in Yukari's position, the consequences extend far beyond the digital space: the embarrassment lingers long after the images are deleted. Although she knew the depictions were fake, she still withdrew socially, weighed down by the stigma. X has not outlined specific fixes, but pressure is rising for tighter rules on generative AI, particularly around accountability when companies release such tools at scale. What stands out is how little clarity exists about who answers for the outcomes.

AI Expert Warns World Is Running Out of Time to Tackle High-Risk AI Revolution

 

AI safety specialist David Dalrymple has warned in no uncertain terms that humanity may be running out of time to prepare for the dangers of fast-moving artificial intelligence. Speaking to The Guardian, the programme director at the UK government's Advanced Research and Invention Agency (ARIA) emphasised that AI development is progressing "really fast" and that no society can safely take the reliability of these systems for granted. He is the latest authoritative figure to add to escalating global anxiety that deployment is outstripping safety research and governance models.

Dalrymple contended that the existential risk comes from AI systems able to do virtually all economically valuable human work more quickly, at lower cost, and at higher quality. In his view, such systems might "outcompete" humans in the very domains that underpin our control over civilization, society, and perhaps even planetary-scale decisions. The concern is not just about losing jobs, but about losing strategic dominance in vital sectors, from security to infrastructure management.

He described a scenario in which AI capabilities race ahead of safety mechanisms, triggering destabilisation across both the security landscape and the broader economy. Dalrymple emphasised an urgent need for more technical research into understanding and controlling the behaviour of advanced AI, particularly as systems become more autonomous and integrated into vital services. Without this work, he suggested, governments and institutions risk deploying tools whose failure modes and emergent properties they barely understand. 

Dalrymple, whose work at ARIA includes developing protections for AI systems used in critical infrastructure such as energy grids, warned that it is "very dangerous" for policymakers to assume advanced AI will simply work as they want it to. He noted that the science needed to fully guarantee reliability is unlikely to emerge in time, given the intense economic incentives driving rapid deployment. As a result, he argued, the "next best" strategy is to focus aggressively on controlling and mitigating the downsides, even if perfect assurance is out of reach.

He also said that by late 2026, AI systems may be able to do a full day's worth of R&D work, including self-improvement in AI-related fields such as mathematics and computer science. Such a development would give a further jolt to AI capabilities and push society deeper into what he described as a "high-risk" transition that civilization is largely "sleepwalking" into. While he conceded that unsettling developments can ultimately yield benefits, he said the road we appear to be on holds considerable peril if safety continues to lag behind capability.
