
OpenAI Faces Court Order to Disclose 20 Million Anonymized ChatGPT Chats


In the ongoing legal battle over artificial intelligence and intellectual property, OpenAI is challenging a sweeping discovery order in a case that could redefine how courts balance innovation, privacy, and the enforcement of copyright.

On Wednesday, the artificial intelligence company asked a federal judge to overturn a ruling requiring it to disclose 20 million anonymized ChatGPT conversation logs, warning that even de-identified records may reveal sensitive information about users.

In the underlying dispute, the New York Times and several other news organizations allege that OpenAI violated their copyrights by illegally using their content to train its large language models.

On January 5, 2026, a federal district court in New York upheld two discovery orders requiring OpenAI to produce a substantial sample of ChatGPT interactions by the end of the year, a consequential milestone in litigation that sits at the intersection of copyright law, data privacy, and the rise of artificial intelligence.

The decision reflects a growing willingness by courts to critically examine the internal data practices of AI developers, even as companies argue that disclosure of this sort could have far-reaching implications for user trust and platform confidentiality. At the center of the controversy are ChatGPT's conversation logs, which record both user prompts and the system's responses.

Those logs, plaintiffs argue, are crucial to evaluating both the infringement claims and OpenAI's asserted defenses, including fair use. In July 2025, plaintiffs moved for the production of a 120-million-log sample; OpenAI refused, citing the scale of the request and the privacy concerns involved.

OpenAI, which maintains billions of logs as part of its normal operations, initially resisted the request. It countered by offering to produce 20 million conversations, stripped of personally identifiable and sensitive information through a proprietary de-identification process.

Plaintiffs accepted the reduced sample as an interim measure, reserving the right to pursue a broader one if the data proved insufficient. Tensions escalated in October 2025 when OpenAI changed its position, offering instead to run targeted keyword searches across the 20-million-log dataset and produce only the conversations that directly implicated the plaintiffs' works.

OpenAI argued that limiting disclosure to filtered results would better safeguard user privacy by preventing the exposure of unrelated communications. Plaintiffs swiftly rejected this approach, filing a new motion demanding release of the entire de-identified dataset.

On November 7, 2025, U.S. Magistrate Judge Ona Wang sided with the plaintiffs, ordering OpenAI to produce the full sample and denying the company's request for reconsideration. The judge ruled that access to both relevant and ostensibly irrelevant logs was necessary for a comprehensive and fair analysis of OpenAI's claims.

Even conversations that do not directly reference copyrighted material, the court reasoned, may bear on OpenAI's fair use defense. In assessing the privacy risks, the court found that reducing the dataset from billions to 20 million records, applying de-identification measures, and enforcing a standing protective order were adequate safeguards.

As the litigation enters a more consequential phase and court-imposed production deadlines approach, Keker Van Nest, Latham & Watkins, and Morrison & Foerster are representing OpenAI in the matter.

Legal observers note that the order reflects a broader judicial posture toward artificial intelligence disputes: courts are increasingly willing to compel extensive discovery, even of anonymized data, to examine how large language models are trained and whether copyrighted material may be involved.

A crucial aspect of the ruling is that it strengthens the procedural avenues for publishers and other content owners to challenge alleged copyright violations by AI developers. It also highlights the need for technology companies to be vigilant stewards of large repositories of user-generated data, and underscores the legal risks of retaining, processing, and releasing such data.

The dispute has intensified amid allegations that OpenAI failed to suspend certain data deletion practices after the litigation commenced, potentially destroying evidence relevant to claims that some users bypassed publisher paywalls through OpenAI products.

Plaintiffs claim the deletions disproportionately affected free and subscription-tier user records, raising concerns about whether evidence preservation obligations were fully met. Microsoft, named as a co-defendant in the case, has been required to produce more than eight million anonymized Copilot interaction logs and has not faced similar data preservation complaints.

CybersecurityNews quoted Dr. Ilia Kolochenko, CEO of ImmuniWeb, on the implications of the ruling. He said that while it represents a significant legal setback for OpenAI, it could also embolden other plaintiffs to pursue similar discovery strategies or leverage stronger settlement positions in parallel proceedings.

In response to the allegations, plaintiffs have asked the court for a deeper investigation into OpenAI's internal data governance practices, including injunctions preventing further deletions until it is clear what data remains and what is potentially recoverable. Beyond the courtroom, the case has been accompanied by intensifying investor scrutiny across the artificial intelligence industry.

With companies such as SpaceX and Anthropic reportedly preparing for possible public offerings at valuations that could reach hundreds of billions of dollars, market confidence increasingly depends on companies' ability to manage regulatory exposure, rising operational costs, and the competitive pressures of rapid artificial intelligence development.

Meanwhile, speculation about strategic acquisitions that could reshape the competitive landscape continues. Reports that OpenAI is exploring Pinterest highlight the strategic value of large volumes of user interaction data for enhancing product search and growing ad revenue, both increasingly critical as major technology companies compete for real-time consumer engagement and data-driven growth.

The litigation has gained added urgency from the news organizations' detailed allegations that a significant volume of potentially relevant data was destroyed because OpenAI failed to preserve key evidence after the lawsuit was filed.

A court filing indicates that plaintiffs learned nearly 11 months ago that large quantities of ChatGPT output logs, affecting a considerable number of Free, Pro, and Plus user conversations, had been deleted at a disproportionately high rate after the suit was filed.

Plaintiffs argue that users trying to circumvent paywalls were more likely to enable chat deletion, which suggests this category of data is the most likely to contain infringing material. The filings further assert that OpenAI offered no rationale for the deletion of approximately one-third of all user conversations after the New York Times' complaint, other than citing what appeared to be an anomalous drop in usage around the New Year of 2024.

The news organizations also allege that OpenAI has continued routine deletion practices without implementing litigation holds, despite two additional spikes in mass deletions attributed to technical issues, while selectively retaining outputs relating to accounts mentioned in the publishers' complaints.

Citing testimony from OpenAI's associate general counsel, Mike Trinh, plaintiffs argue that OpenAI preserved the documents that substantiate its own defenses, while the records that could substantiate third parties' claims were not preserved.

Plaintiffs say the precise extent of the data loss remains unclear because OpenAI still refuses to disclose even basic details about what it does and does not erase, an approach they contrast with Microsoft's ability to preserve Copilot logs without similar difficulties.

Because so much of OpenAI's data has been deleted, and because Microsoft has not yet produced searchable Copilot logs, the news organizations are seeking a court order compelling Microsoft to produce those logs as soon as possible.

They have also asked the court to maintain the existing preservation orders preventing further permanent deletions of output data, to compel OpenAI to accurately report the extent to which output data has been destroyed across its products, and to clarify whether any of that information can be restored and examined for legal purposes.

How Generative AI Is Accelerating Password Attacks on Active Directory

 

Active Directory remains the backbone of identity management for most organizations, which is why it continues to be a prime target for cyberattacks. What has shifted is not the focus on Active Directory itself, but the speed and efficiency with which attackers can now compromise it.

The rise of generative AI has dramatically reduced the cost and complexity of password-based attacks. Tasks that once demanded advanced expertise and substantial computing resources can now be executed far more easily and at scale.

Tools such as PassGAN mark a significant evolution in password-cracking techniques. Instead of relying on static wordlists or random brute-force attempts, these systems use adversarial learning to understand how people actually create passwords. With every iteration, the model refines its predictions based on real-world behavior.

The impact is concerning. Research indicates that PassGAN can crack 51% of commonly used passwords in under one minute and 81% within a month. The pace at which these models improve only increases the risk.

When trained using organization-specific breach data, public social media activity, or information from company websites, AI models can produce highly targeted password guesses that closely mirror employee habits.

How generative AI is reshaping password attack methods

Earlier password attacks followed predictable workflows. Attackers relied on dictionary lists, applied rule-based tweaks—such as replacing letters with symbols or appending numbers—and waited for successful matches. This approach was slow and computationally expensive. Generative AI changes the equation in several ways:
  • Pattern recognition at scale: Machine learning systems identify nuanced behaviors in password creation, including keyboard habits, substitutions, and the use of personal references. Instead of wasting resources on random guesses, attackers concentrate computing power on the most statistically likely passwords.
  • Smart credential variation: When leaked credentials are obtained from external breaches, AI can generate environment-specific variations. If “Summer2024!” worked elsewhere, the model can intelligently test related versions such as “Winter2025!” or “Spring2025!” rather than guessing blindly.
  • Automated intelligence gathering: Large language models can rapidly process publicly available data—press releases, LinkedIn profiles, product names—and weave that context into phishing campaigns and password spray attacks. What once took hours of manual research can now be completed in minutes.
  • Reduced technical barriers: Pre-trained AI models and accessible cloud infrastructure mean attackers no longer need specialized skills or costly hardware. The increased availability of high-performance consumer GPUs has unintentionally strengthened attackers’ capabilities, especially when organizations rent out unused GPU capacity.
Today, for roughly $5 per hour, attackers can rent eight RTX 5090 GPUs capable of cracking bcrypt hashes about 65% faster than previous generations.
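The "smart credential variation" tactic above can also be turned around defensively: if a credential leaks in an external breach, a password filter can proactively block the season/year variants an attacker's model would try first. A minimal sketch in Python; the `seasonal_variants` helper and the season list are illustrative assumptions, not any real product's API.

```python
import re

# Hypothetical helper: given a credential leaked in an external breach,
# enumerate the season/year variants an attacker would likely try first,
# so a password filter can proactively block them. Purely illustrative.
SEASONS = ["Spring", "Summer", "Autumn", "Fall", "Winter"]

def seasonal_variants(leaked: str, years=range(2024, 2027)) -> set:
    # Only handles the "Season + 4-digit year + suffix" shape from the text.
    m = re.match(r"(Spring|Summer|Autumn|Fall|Winter)(\d{4})(.*)", leaked)
    if not m:
        return set()
    _, _, suffix = m.groups()
    return {f"{s}{y}{suffix}" for s in SEASONS for y in years}

blocked = seasonal_variants("Summer2024!")
print("Winter2025!" in blocked)  # True: variants inherit the leaked suffix
```

A real filter would cover far more mutation rules (digit increments, leetspeak, keyboard walks); this shows only the core idea of expanding one leak into a blocklist.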

Even when strong hashing algorithms and elevated cost factors are used, the sheer volume of password guesses now possible far exceeds what was realistic just a few years ago. Combined with AI-generated, high-probability guesses, the time needed to break weak or moderately strong passwords has dropped significantly.

Why traditional password policies are no longer enough

Many Active Directory password rules were designed before AI-driven threats became mainstream. Common complexity requirements—uppercase letters, lowercase letters, numbers, and symbols—often result in predictable structures that AI models are well-equipped to exploit.

"Password123!" meets complexity rules but follows a pattern that generative models can instantly recognize.

Similarly, enforced 90-day password rotations have lost much of their defensive value. Users frequently make minor, predictable changes such as adjusting numbers or referencing seasons. AI systems trained on breach data can anticipate these habits and test them during credential stuffing attacks.
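One defensive response to predictable rotations is to reject a new password that is only a minor edit of the old one. A sketch using the standard library's `difflib`; the 0.7 threshold is an illustrative assumption, not an established standard.

```python
from difflib import SequenceMatcher

# Flag a password "rotation" that is only a minor edit of the old password,
# the kind of predictable change models trained on breach data anticipate.
def too_similar(old: str, new: str, threshold: float = 0.7) -> bool:
    # ratio() returns 2*M/T where M is matched chars and T is total length,
    # so 1.0 means identical and 0.0 means nothing in common.
    ratio = SequenceMatcher(None, old.lower(), new.lower()).ratio()
    return ratio >= threshold

print(too_similar("Summer2024!", "Summer2025!"))            # True: one digit changed
print(too_similar("Summer2024!", "ostrich-lantern-quartz")) # False
```

Note this check requires access to the old password at change time (e.g. inside a password-change filter), since stored hashes cannot be compared for similarity.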

While basic multi-factor authentication (MFA) adds protection, it does not eliminate the risks posed by compromised passwords. If attackers bypass MFA through tactics like social engineering, session hijacking, or MFA fatigue, access to Active Directory may still be possible.

Defending Active Directory against AI-assisted attacks

Countering AI-enhanced threats requires moving beyond compliance-driven controls and focusing on how passwords fail in real-world attacks. Password length is often more effective than complexity alone.

AI models struggle more with long, random passphrases than with short, symbol-heavy strings. An 18-character passphrase built from unrelated words presents a much stronger defense than an 8-character complex password.
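The length-versus-complexity point can be made with back-of-envelope entropy arithmetic, assuming the attacker knows the generation scheme. The word-list size (7,776) is the standard Diceware list; the pool sizes for the human "word + digits + symbol" pattern are illustrative assumptions.

```python
import math

# Entropy in bits = log2(number of equally likely candidates),
# assuming the attacker knows how the password was generated.
random_8_char = 8 * math.log2(95)        # 8 truly random printable ASCII chars
passphrase_4_word = 4 * math.log2(7776)  # 4 random Diceware words (~20 chars)
human_pattern = (math.log2(30_000)       # one common dictionary word
                 + math.log2(10_000)     # 4-digit number
                 + math.log2(32))        # one symbol -- the "Password123!" shape

print(f"random 8-char:   {random_8_char:.1f} bits")    # ~52.6
print(f"4-word phrase:   {passphrase_4_word:.1f} bits") # ~51.7
print(f"human pattern:   {human_pattern:.1f} bits")     # ~33.2
```

The takeaway: a randomly chosen multi-word passphrase matches the theoretical strength of a fully random 8-character password, while the human-chosen complex password that merely satisfies composition rules sits far below both, and each extra word adds ~13 bits.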

Equally critical is visibility into whether employee passwords have already appeared in breach datasets. If a password exists in an attacker’s training data, hashing strength becomes irrelevant—the attacker simply uses the known credential.
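Breach-exposure checks can be done without ever sending the password anywhere, using the k-anonymity scheme popularized by the Have I Been Pwned "range" API: only the first five hex characters of the SHA-1 hash leave the organization, and matching happens locally against the returned suffixes. The sketch below substitutes a local lookup for the real HTTP call.

```python
import hashlib

# k-anonymity breach check: split the SHA-1 hash into a 5-char prefix
# (safe to transmit) and a suffix (compared locally).
def hash_parts(password: str):
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def is_breached(password: str, suffixes_for_prefix) -> bool:
    prefix, suffix = hash_parts(password)
    return suffix in suffixes_for_prefix(prefix)

# Stand-in for the service call that returns all known suffixes under a prefix:
known = {hash_parts("Password123!")}
lookup = lambda prefix: {s for p, s in known if p == prefix}

print(is_breached("Password123!", lookup))          # True
print(is_breached("correct-horse-battery", lookup)) # False
```

In production the `lookup` would be an HTTPS GET against the range endpoint (with caching), but the hashing and comparison logic is exactly this.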

Specops Password Policy and Breached Password Protection help organizations defend against over 4 billion known unique compromised passwords, including those that technically meet complexity rules but have already been stolen by malware.

The solution updates daily using real-world attack intelligence, ensuring protection against newly exposed credentials. Custom dictionaries that block company-specific terminology—such as product names, internal jargon, and brand references—further reduce the effectiveness of AI-driven reconnaissance.

When combined with passphrase support and robust length requirements, these measures significantly increase resistance to AI-generated password guessing.

Before applying new controls, organizations should assess their existing exposure. Specops Password Auditor provides a free, read-only scan of Active Directory to identify weak passwords, compromised credentials, and policy gaps—without altering the environment.

This assessment helps pinpoint where AI-powered attacks are most likely to succeed.

Generative AI has fundamentally shifted the balance of effort in password attacks, giving adversaries a clear advantage.

The real question is no longer whether defenses need to be strengthened, but whether organizations will act before their credentials appear in the next breach.

Suspicious Polymarket Bets Spark Insider Trading Fears After Maduro’s Capture

 

A sudden, massive bet surfaced just ahead of a major political development involving Venezuela’s leader. Days prior to Donald Trump revealing that Nicolás Maduro had been seized by U.S. authorities, an individual on Polymarket placed a highly profitable position. That trade turned a substantial gain almost instantly after the news broke. Suspicion now centers on how the timing could have been so precise. Information not yet public might have influenced the decision. The incident casts doubt on who truly knows what - and when - in digital betting arenas. Profits like these do not typically emerge without some edge. 

Hours before Trump spoke on Saturday, predictions about Maduro losing control by late January jumped fast on Polymarket. A single user, active for less than a month, made four distinct moves tied to Venezuela's political situation. That player started with $32,537 and ended with over $436,000 in returns. Instead of a name, only a digital wallet marks the profile. Who actually placed those bets has not come to light. 

That Friday afternoon, market signals began shifting - quietly at first. Come late evening, chances of Maduro being ousted edged up to 11%, starting from only 6.5% earlier. Then, overnight into January 3, something sharper unfolded. Activity picked up fast, right before news broke. Word arrived via a post: Trump claimed Maduro was under U.S. arrest. Traders appear to have moved quickly, moments prior. Their actions hint at advance awareness - or sharp guesswork - as prices reacted well before confirmation surfaced. Despite repeated attempts, Polymarket offered no prompt reply regarding the odd betting patterns. 

Still, unease is growing among regulators and lawmakers. According to Dennis Kelleher, who leads Better Markets, an independent organization focused on financial oversight, the bet carries every sign of being rooted in privileged knowledge. Nor was it just one trader who walked away with gains: others on Polymarket also pulled in sizable returns, tens of thousands of dollars, in the window before the news broke. That timing hints at information spreading earlier than expected, with some clues likely slipping out ahead of formal releases. The episode has sparked concern among American legislators.

On Monday, New York's Representative Ritchie Torres - affiliated with the Democratic Party - filed a bill targeting insider activity by public officials in forecast-based trading platforms. Should such individuals hold significant details not yet disclosed, involvement in these wagers would be prohibited under his plan. This move surfaces amid broader scrutiny over how loosely governed these speculative arenas remain. Prediction markets like Polymarket and Kalshi gained traction fast across the U.S., letting people bet on politics, economies, or world events. 

When the 2024 presidential race heated up, millions flowed into these sites. Trading on insider knowledge faces strict rules on Wall Street, yet forecasting platforms often escape similar control. Under Biden, authorities paid closer attention to these markets, increasing pressure across the sector; when Trump returned to office, conditions shifted, opening space for lighter supervision. Donald Trump Jr. serves in behind-the-scenes advisory roles at both Kalshi and Polymarket.

Though Kalshi clearly prohibits insider trading, even by government staff using classified details, the Maduro wagering debate reveals how regulators are struggling. Prediction platforms increasingly blur the lines between guesswork, informational advantage, and outright ethical breaches.

Nvidia Introduces New AI Platform to Advance Self-driving Vehicle Technology

 



Nvidia is cementing its presence in the autonomous vehicle space by introducing a new artificial intelligence platform designed to help cars make decisions in complex, real-world conditions. The move reflects the company’s broader strategy to take AI beyond digital tools and embed it into physical systems that operate in public environments.

The platform, named Alpamayo, was introduced by Nvidia chief executive Jensen Huang during a keynote address at the Consumer Electronics Show in Las Vegas. According to the company, the system is built to help self-driving vehicles reason through situations rather than simply respond to sensor inputs. This approach is intended to improve safety, particularly in unpredictable traffic conditions where human judgment is often required.

Nvidia says Alpamayo enables vehicles to manage rare driving scenarios, operate smoothly in dense urban settings, and provide explanations for their actions. By allowing a car to communicate what it intends to do and why, the company aims to address long-standing concerns around transparency and trust in autonomous driving technology.

As part of this effort, Nvidia confirmed a collaboration with Mercedes-Benz to develop a fully driverless vehicle powered by the new platform. The company stated that the vehicle is expected to launch first in the United States within the next few months, followed by expansion into European and Asian markets.

Although Nvidia is widely known for the chips that support today’s AI boom, much of the public focus has remained on software applications such as generative AI systems. Industry attention is now shifting toward physical uses of AI, including vehicles and robotics, where decision-making errors can have serious consequences.

Huang noted that Nvidia’s work on autonomous systems has provided valuable insight into building large-scale robotic platforms. He suggested that physical AI is approaching a turning point similar to the rapid rise of conversational AI tools in recent years.

A demonstration shown at the event featured a Mercedes-Benz vehicle navigating the streets of San Francisco without driver input, while a passenger remained seated behind the wheel with their hands off. Nvidia explained that the system was trained using human driving behavior and continuously evaluates each situation before acting, while also explaining its decisions in real time.

Nvidia also made the Alpamayo model openly available, releasing its core code on the machine learning platform Hugging Face. The company said this would allow researchers and developers to freely access and retrain the system, potentially accelerating progress across the autonomous vehicle industry.

The announcement places Nvidia in closer competition with companies already offering advanced driver-assistance and autonomous driving systems. Industry observers note that while achieving high levels of accuracy is possible, addressing rare and unusual driving scenarios remains a major technical hurdle.

Nvidia further revealed plans to introduce a robotaxi service next year in partnership with another company, although it declined to disclose the partner’s identity or the locations where the service will operate.

The company currently holds the position of the world’s most valuable publicly listed firm, with a market capitalization exceeding 4.5 trillion dollars, or roughly £3.3 trillion. It briefly became the first company to reach a valuation of 5 trillion dollars in October, before losing some value amid investor concerns that expectations around AI demand may be inflated.

Separately, Nvidia confirmed that its next-generation Rubin AI chips are already being manufactured and are scheduled for release later this year. The company said these chips are designed to deliver strong computing performance while using less energy, which could help reduce the cost of developing and deploying AI systems.

Chrome WebView Flaw Lets Hackers Bypass Security, Update Urgently Advised

 

Google has rolled out an urgent security fix for the Chrome browser to address a high severity flaw in the browser’s WebView tag. According to the tech firm, the flaw allows hackers to evade major browser security features to gain access to user data. Identified as CVE-2026-0628, the vulnerability in the browser occurs due to inadequate policy enforcement in the browser’s WebView tag. 

WebView is a very common feature in applications; its primary purpose is to display web pages inside an app without launching a separate browser, which makes it a major entry point for attackers if not handled appropriately. The flaw could allow malicious web content to escape its security boundary and compromise sensitive data being processed by the host application.

To fix the issue, Google has released Chrome version 143.0.7499.192/.193 for Windows and Mac, and version 143.0.7499.192 for Linux, through the stable channel. Because the update will roll out gradually over the coming days and weeks, users should manually check for and install it as quickly as possible rather than waiting. Google will withhold detailed information about the vulnerability until a majority of users have installed the patch, to prevent hackers from exploiting the problem.

End users are strongly advised to update Chrome by navigating to Settings > Help > About Google Chrome, where the browser will automatically look for and install the latest security fixes. Organizations managing fleets of Chrome installations should prioritize rapid deployment of this patch across their infrastructure to minimize exposure in WebView‑dependent applications. Failing to update promptly could leave both consumer and enterprise applications open to targeted attacks leveraging this vulnerability. 
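Fleet administrators verifying the rollout need to compare dotted version strings numerically, not lexically (string comparison would rank "143.0.7420.81" above "143.0.7499.192" in some positions). A small illustrative check in Python; the function name is ours, and the patched version is the one stated above.

```python
# Check (illustrative) that an installed Chrome version is at or above
# the patched stable build, comparing dotted versions numerically.
PATCHED = "143.0.7499.192"

def is_patched(installed: str, patched: str = PATCHED) -> bool:
    to_tuple = lambda v: tuple(int(x) for x in v.split("."))
    # Tuple comparison handles each numeric component in order.
    return to_tuple(installed) >= to_tuple(patched)

print(is_patched("143.0.7499.193"))  # True
print(is_patched("143.0.7420.81"))   # False: build 7420 < 7499
```

In practice the installed version would come from inventory tooling or `chrome://version`; the comparison logic is the part worth getting right.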

Additionally, Google credits the external security researchers who reported the bug and points to its continued investment in detection tooling such as AddressSanitizer, MemorySanitizer, UndefinedBehaviorSanitizer, Control Flow Integrity, libFuzzer, and AFL, which help find bugs at an early stage. The company also reiterates the importance of its bug bounty program and invites the security community to responsibly disclose vulnerabilities to help keep Chrome secure for billions of users. The incident underscores that continual collaboration between vendors and researchers is key to keeping pace with emerging threats.

Lego’s Move Into Smart Toys Faces Scrutiny From Play Professionals


 

Following the unveiling of its smart brick technology, LEGO is seeking to reassure critics who argue that the initiative, despite the company's longstanding history of innovation, could undermine its commitment to hands-on, imaginative play.

The announcement signals a significant shift in LEGO's product strategy, and it has sparked early debate among industry observers and play experts about whether adding digital intelligence to LEGO bricks could pull the company away from its traditional brick foundation.

A few weeks ago, Federico Begher, LEGO's Senior Vice President of Product and New Business, addressed these concerns in an interview with IGN, explaining that the introduction of smart elements is a carefully considered milestone, years in the making, intended to enhance rather than replace the tactile creativity that has characterized the brand for generations.

With the launch of the new Smart Bricks, LEGO has introduced one of the most significant product developments in its history, positioning the company to reinvent the way its iconic building system engages a new generation of players.

The technology, introduced at CES 2026, embeds sound, light, and motion-responsive elements directly into bricks, allowing structures to respond dynamically to touch and movement.

During the announcement, LEGO executives framed the initiative as a natural extension of the brand's creative ethos, intended to entice children to go beyond building static objects by designing interactive models that can be programmed and adapted in real time.

The approach has drawn a great deal of enthusiasm as a way to encourage digital literacy and problem-solving at an early age, but education and child-development specialists have also expressed more measured reactions.

While some recognize that the technology can expand the educational possibilities available to children, others warn that increased electronic use may alter the tactile, open-ended nature of traditional brick-based play.

At the core of LEGO's Smart Play ecosystem is a newly developed Smart Brick that replicates the dimensions of the familiar 2x4 brick while housing the embedded electronics that make Smart Play work.

Besides a custom microchip, the brick contains motion and light sensors, orientation detection, integrated LEDs, and a compact speaker. It anchors a wider system that also includes Smart Minifigures and Smart Tags, each carrying its own distinct digital identifier.

Whenever these elements are combined or brought into proximity with one another, the Smart Brick recognizes them and performs predefined behaviors or lighting effects.

No internet connectivity, cloud-based processing, or companion application is required: multiple Smart Bricks coordinate their responses over BrickNet, a proprietary local wireless protocol.

Despite occasional mention of artificial intelligence, LEGO has emphasized that the system relies on on-device logic rather than adaptive or generative models, delivering consistent and predictable responses meant to complement traditional hands-on play, not replace it.

Smart Bricks respond to simple physical interactions: directional changes, impacts, or proximity trigger predetermined visual and audio cues. A falling model, for example, can set off flashing lights and sound effects, while attached Smart Tags provide contextual storytelling elements that guide play scenarios.

Academics have offered cautious praise for this combination of digital responsiveness and tangible construction. Professor Andrew Manches, a specialist in children and technology at the University of Edinburgh, described the system as technologically advanced, but added that imaginative play ultimately relies on a child's ability to develop narratives on their own rather than follow scripted prompts.

LEGO is scheduled to release Smart Bricks on March 1, 2026, with Star Wars-themed sets arriving first; preorders begin January 9 through the company's retail channels and select partners.

The electronic components position the products as premium items, with entry-level sets priced under $100 and larger collections above $150. Some child advocacy groups have expressed concerns that the preprogrammed responses in LEGO's BrickNet system could subtly restrict creative freedom or introduce privacy risks. 

However, LEGO maintains that its offline, encrypted system avoids many of the vulnerabilities associated with app-dependent smart toys that rely on internet connections. The company has introduced interactive elements into its portfolio gradually, seeking to balance technological innovation with the enduring appeal of physical, open-ended play, an approach that has shaped its digital strategy as a whole. 

While the debate over the Smart Bricks continues, there is a more fundamental question of how the world's largest toy maker is going to manage the conflict between tradition and innovation. 

LEGO has no near-term plans to replace classic bricks with its Smart Play system; executives insist the technology is designed primarily to add value to classic bricks rather than supplant them, positioning it as a complementary layer that families can choose to engage with or ignore. 

By keeping the system fully offline and avoiding app dependency, the company has attempted to address the data security and privacy concerns that have increasingly shaped conversations about connected toys. 

According to industry analysts, LEGO's premium pricing and phased rollout, starting with internationally popular licensed themes, suggest a market-tested approach rather than a wholesale change in the company's identity. 

A key factor in the long-term success of Smart Bricks will be whether they can earn the trust of parents, educators, and children once they enter homes. Success would reinforce LEGO's reputation for fostering creativity while adapting to the expectations of a digitally native generation.

Android Malware Uses Artificial Intelligence to Secretly Generate Ad Clicks

 


Security researchers have identified a new category of Android malware that uses artificial intelligence to carry out advertising fraud without the user’s knowledge. The malicious software belongs to a recently observed group of click-fraud trojans that rely on machine learning rather than traditional scripted techniques.

Instead of using hard-coded JavaScript instructions to interact with web pages, this malware analyzes advertisements visually. By examining what appears on the screen, it can decide where to tap, closely imitating normal user behavior. This approach allows the malware to function even when ads frequently change layout, include video content, or are embedded inside iframes, which often disrupt older click-fraud methods.

The threat actors behind the operation are using TensorFlow.js, an open-source machine learning library developed by Google. The framework allows trained AI models to run inside web browsers or server environments through JavaScript. In this case, the models are loaded remotely and used to process screenshots taken from an embedded browser.

Researchers from mobile security firm Dr.Web reported that the malware has been distributed through GetApps, Xiaomi’s official application store. The infected apps are mainly games. In several cases, the applications were initially uploaded without harmful functionality and later received malicious components through software updates.

Once active, the malware can run in what researchers describe as a “phantom” mode. In this mode, it opens a hidden browser based on Android’s WebView component. This browser loads a webpage containing advertisements and a JavaScript file designed to automate interactions. The browser operates on a virtual screen that is not visible to the device owner. Screenshots of this screen are repeatedly captured and sent to the AI model, which identifies relevant ad elements and triggers taps that appear legitimate.
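The screenshot-to-tap loop described above can be sketched in simplified form. The real malware reportedly runs TensorFlow.js models over frames from a hidden WebView; here a stub stands in for the vision model, and all function names and coordinates are illustrative assumptions rather than details from the actual samples:

```python
import random

def detect_ad(screenshot):
    """Stub for the vision step: return a bounding box for an ad element.
    A real model would locate the ad visually in the screenshot."""
    return {"x": 40, "y": 300, "w": 240, "h": 60}  # placeholder detection

def pick_tap_point(box):
    """Tap near the center of the detected box, with small jitter so the
    touch coordinates vary like a human's rather than repeating exactly."""
    cx = box["x"] + box["w"] // 2
    cy = box["y"] + box["h"] // 2
    return (cx + random.randint(-5, 5), cy + random.randint(-5, 5))

def phantom_cycle(screenshot):
    """One iteration of the loop: screenshot -> detection -> synthetic tap."""
    box = detect_ad(screenshot)
    if box is None:
        return None  # nothing to click in this frame
    return pick_tap_point(box)
```

Because the decision is made from what is rendered on screen rather than from page markup, the technique keeps working when ad layouts change or ads sit inside iframes, which is exactly what defeats older scripted click fraud.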

A second operational mode, referred to as “signalling,” gives attackers direct control. Using WebRTC technology, the malware streams a live video feed of the hidden browser to the threat actor. This allows them to perform actions such as tapping, scrolling, or entering text in real time.

Dr.Web identified multiple infected games hosted on Xiaomi’s platform, including titles with tens of thousands of downloads. Beyond official app stores, the malware has also been found in modified versions of popular streaming applications distributed through third-party APK websites, Telegram channels, and a Discord server with a large subscriber base. Many of these apps function as expected, which reduces user suspicion.

Although this activity does not directly target personal data, it still affects users through increased battery drain, higher mobile data usage, and faster device wear. For cybercriminals, however, covert ad fraud remains a profitable operation.

Security experts advise Android users to avoid downloading apps from unofficial sources and to be cautious of altered versions of well-known apps that promise free access to paid features.

WhatsApp Bug Leads to Exposure of User Metadata

 


Meta has begun addressing a number of vulnerabilities in WhatsApp that expose sensitive user information, showing that even encrypted platforms can inadvertently reveal critical device details. 

The vulnerabilities stem from the messaging service's multi-device architecture, in which subtle implementation differences can reveal whether a user is on an Android or iOS device, even though message content remains end-to-end encrypted. 

According to security researchers, this kind of operating-system fingerprinting is particularly valuable to advanced threat actors, who often choose WhatsApp, with its more than three billion monthly active users, as a preferred channel for delivering advanced spyware.

It was discovered that attackers can passively query WhatsApp servers for cryptographic session details without any interaction with the victim, using variations in key identifiers, such as Signed Pre-Keys and One-Time Pre-Keys, to determine the target platform. 

With this intelligence, adversaries can tailor exploits to specific victims, deploying Android-specific malware only to compatible devices while avoiding detection elsewhere, underscoring how difficult it is to mask metadata signatures even within encrypted communication ecosystems.

Researchers warn that threat actors abusing WhatsApp as an attack vector can passively query its servers for encryption-related material, obtaining device information without any user interaction. With this capability, adversaries can accurately determine a victim's operating system; recent findings suggest that subtle differences in key ID generation reliably differentiate Android from iOS devices. 

Advanced persistent threat (APT) operations often involve zero-day exploits tailored to specific platforms. Deploying these exploits against incompatible devices can not only cause the attack to fail, but may also expose highly sensitive attack infrastructure worth millions of dollars. 

Furthermore, the study found a risk of large-scale data harvesting: it estimated that data linked to at least 3.5 billion registered phone numbers could potentially be accessed, a figure that may include inactive or recycled accounts. 

Besides cryptographic identifiers, the accessible information included phone numbers, timestamps, “About” field text, profile photos, and public encryption keys, prompting researchers to warn that, in the wrong hands, this dataset could have produced one of the largest data leaks ever documented. 

Among the most concerning findings of the study was the fact that more than half of the accounts displayed photos, with a majority displaying identifiable faces. There is a strong possibility that this will lead to large-scale abuse, such as reverse phonebook services using facial recognition technology.

Gabriel Gegenhuber, the study's lead author, pointed out that the systems should never have allowed such a large volume of rapid queries from a single source. He noted that Meta tightened rate limiting on WhatsApp's web client in October 2025, after the problem was reported through the company's bug bounty program earlier that year. 

Further technical analysis determined that attackers can obtain detailed insight into a user's WhatsApp environment by exploiting predictable patterns in the application's encryption key identifiers. 

Recent research demonstrated the possibility of tracing a user's primary device, identifying the operating system of each linked device, estimating the relative age of each connected device, and determining whether WhatsApp is accessed through the mobile application or the desktop web client. 

A number of conclusions were drawn from the history of deterministic values assigned to certain encryption key IDs, which have effectively served as device fingerprints. Tal Be'ery, co-founder and chief technology officer of the Zengo cryptocurrency wallet and one of the researchers leading this work, shared the findings with Meta alongside other experts. 

Although early reports indicated little response from the company, Be'ery later observed that Meta began mitigating the issue by introducing randomized key ID values, specifically on Android devices, which appeared to work. Testing with a non-public fingerprinting tool, he confirmed that the changes represent progress, even though the mitigation is only partially effective. 

A recent article by Be'ery, and a demonstration that followed, showed that attackers can still distinguish Android and iPhone devices from One-Time Pre-Key identifiers with a high degree of confidence. 

The article notes that the iPhone's initial values are low and increment gradually, whereas Android draws from a much broader, randomized range. Be'ery acknowledged, however, that Meta had recognized the issue as a legitimate security and privacy concern, and he welcomed the steps taken to reduce its impact despite these limitations.
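The reported pattern, low incrementing identifiers on iPhone versus a wide randomized range on Android, lends itself to a simple heuristic. The thresholds below are illustrative assumptions chosen for the sketch, not the researchers' actual values:

```python
# Heuristic sketch of key-ID-based platform fingerprinting as reported:
# iPhone One-Time Pre-Key IDs start low and climb gradually, while Android
# IDs are drawn from a much wider randomized range. Thresholds are assumed.

def guess_platform(key_ids):
    """Guess the OS from a sequence of observed key identifiers."""
    small = all(k < 10_000 for k in key_ids)
    increasing = all(b > a for a, b in zip(key_ids, key_ids[1:]))
    if small and increasing:
        return "likely iOS"
    if max(key_ids) - min(key_ids) > 1_000_000:
        return "likely Android"
    return "unknown"
```

Randomizing the identifier space on both platforms, as Meta reportedly began doing on Android, removes exactly the statistical separation this kind of classifier relies on.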

The study therefore underscores that WhatsApp's metadata exposure is not a theoretical worry but a real security risk with wide-ranging consequences. In advanced attacks, metadata plays a key role in reconnaissance, allowing adversaries to identify targets, differentiate between iOS and Android environments, select compatible exploits, and reduce the number of failed intrusion attempts, thereby improving the odds of social engineering, spear-phishing, and exploit-chain attacks.

At scale, such data can be fed into OSINT applications and AI-driven profiling tools, significantly reducing the cost of target selection while enhancing the precision of malicious operations. Researchers also warned of the dangers of public profile photos: the ability to tie facial images to phone numbers en masse could enable facial recognition-based reverse phonebook services.

These risks are magnified for people with high public exposure or in sensitive environments, such as journalists, activists, and professionals performing sensitive work, where metadata correlation may result in physical or personal harm. 

The study also found that millions of accounts are registered in jurisdictions where WhatsApp is officially banned, raising concerns that using the service in those regions could carry legal repercussions or expose users to persecution. The findings further highlight the structural problems created by WhatsApp's centralized architecture, which results in a single point of failure affecting billions of users, limits independent oversight, and leaves individuals with little control over their data. 

Until deeper structural safeguards are implemented or alternative platforms are adopted, researchers recommend that users take practical steps to reduce their exposure. 

Those steps include restricting profile photo visibility, minimizing personal details in public fields, avoiding identifiable images where appropriate, reviewing connected devices, limiting data synchronization, and using more privacy-preserving messaging services for sensitive communication.

In sum, the findings point to a widening gap between the protections users expect from encrypted messaging platforms and the less visible risks of metadata leakage. Meta's recent mitigation efforts show the issue has been acknowledged, but the persistence of device fingerprinting techniques illustrates how difficult it is to completely eradicate side-channel signals in large, globally scaled systems. 

The fact remains that even limited metadata leakage on a platform that functions as a primary communication channel for governments, businesses, and civil society organizations alike may have outsized consequences if it is aggregated or exploited by capable adversaries. 

It is also important to recognize that encryption alone is not sufficient to guarantee privacy when the surrounding technical and architectural decisions allow the inference of contextual information. 

WhatsApp’s experience serves as a reminder that, as regulators, researchers, and users increasingly scrutinize the security boundaries of dominant messaging services, protecting billions of users requires not only strong cryptography but also continuous transparency and rigorous oversight. Metadata needs to be treated as a first-class security concern rather than an unavoidable side effect.

Hackers Abuse Vulnerable Training Web Apps to Breach Enterprise Cloud Environments

 

Threat actors are actively taking advantage of poorly secured web applications designed for security training and internal penetration testing to infiltrate cloud infrastructures belonging to Fortune 500 firms and cybersecurity vendors. These applications include deliberately vulnerable platforms such as DVWA, OWASP Juice Shop, Hackazon, and bWAPP.

Research conducted by automated penetration testing firm Pentera reveals that attackers are using these exposed apps as entry points to compromise cloud systems. Once inside, adversaries have been observed deploying cryptocurrency miners, installing webshells, and moving laterally toward more sensitive assets.

Because these testing applications are intentionally insecure, exposing them to the public internet—especially when they run under highly privileged cloud accounts—creates significant security risks. Pentera identified 1,926 active vulnerable applications accessible online, many tied to excessive Identity and Access Management (IAM) permissions and hosted across AWS, Google Cloud Platform (GCP), and Microsoft Azure environments.

Pentera stated that the affected deployments belonged to several Fortune 500 organizations, including Cloudflare, F5, and Palo Alto Networks. The researchers disclosed their findings to the impacted companies, which have since remediated the issues. Analysis showed that many instances leaked cloud credentials, failed to implement least-privilege access controls, and more than half still relied on default login details—making them easy targets for attackers.
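A first-pass audit for the default-login problem can be as simple as comparing configured credentials against the well-known logins these training apps ship with (DVWA uses admin/password, bWAPP uses bee/bug). The sketch below is a minimal string comparison for illustration; a real audit would attempt authentication over HTTP against each exposed instance:

```python
# Minimal sketch of a default-credential check for deliberately vulnerable
# training apps. The defaults listed are the apps' shipped logins; the
# function and structure here are illustrative, not an audit tool.

KNOWN_DEFAULTS = {
    "DVWA": ("admin", "password"),
    "bWAPP": ("bee", "bug"),
}

def uses_default_creds(app, username, password):
    """True if the supplied login matches the app's shipped default."""
    return KNOWN_DEFAULTS.get(app) == (username, password)
```

Per Pentera's findings, more than half of the exposed instances would fail even this trivial check.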

The exposed credentials could allow threat actors to fully access S3 buckets, Google Cloud Storage, and Azure Blob Storage, as well as read and write secrets, interact with container registries, and obtain administrative-level control over cloud environments. Pentera emphasized that these risks were already being exploited in real-world attacks.

"During the investigation, we discovered clear evidence that attackers are actively exploiting these exact attack vectors in the wild – deploying crypto miners, webshells, and persistence mechanisms on compromised systems," the researchers said.

Signs of compromise were confirmed when analysts examined multiple misconfigured applications. In some cases, they were able to establish shell access and analyze data to identify system ownership and attacker activity.

"Out of the 616 discovered DVWA instances, around 20% were found to contain artifacts deployed by malicious actors," Pentera says in the report.

The malicious activity largely involved the use of the XMRig mining tool, which silently mined Monero (XMR) in the background. Investigators also uncovered a persistence mechanism built around a script named ‘watchdog.sh’. When removed, the script could recreate itself from a base64-encoded backup and re-download XMRig from GitHub.

Additionally, the script retrieved encrypted tools from a Dropbox account using AES-256 encryption and terminated rival miners on infected systems. Other incidents involved a PHP-based webshell called ‘filemanager.php’, capable of file manipulation and remote command execution.
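Defenders hunting for this persistence pattern can look for the traits Pentera describes together in one script: base64 decoding of a self-backup, references to XMRig, and a re-download from GitHub. The marker strings below are illustrative, not verified signatures from the actual sample:

```python
# Defensive sketch: flag shell scripts that combine the persistence traits
# reported for 'watchdog.sh'. Indicator strings are assumptions for
# illustration, not signatures extracted from the real malware.

INDICATORS = ("base64 -d", "xmrig", "github.com")

def looks_like_watchdog(script_text):
    """True only when all three indicator strings appear in the script."""
    text = script_text.lower()
    return all(marker in text for marker in INDICATORS)
```

Requiring all indicators together keeps false positives low, at the cost of missing variants that change any one of them.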

This webshell contained embedded authentication credentials and was configured with the Europe/Minsk (UTC+3) timezone, potentially offering insight into the attackers’ location.

Pentera noted that these malicious components were discovered only after Cloudflare, F5, and Palo Alto Networks had been notified and had already resolved the underlying exposures.

To reduce risk, Pentera advises organizations to keep an accurate inventory of all cloud assets—including test and training applications—and ensure they are isolated from production environments. The firm also recommends enforcing least-privilege IAM permissions, removing default credentials, and setting expiration policies for temporary cloud resources.

The full Pentera report outlines the investigation process in detail and documents the techniques and tools used to locate vulnerable applications, probe compromised systems, and identify affected organizations.

Fortinet Firewalls Targeted as Attackers Bypass Patch for Critical FortiGate Flaw

 

Critical vulnerabilities in FortiGate systems continue to be exploited, even after fixes were deployed, users now confirm. Though updates arrived aiming to correct the problem labeled CVE-2025-59718, they appear incomplete. Authentication safeguards can still be sidestepped by threat actors taking advantage of the gap. This suggests earlier remedies failed to close every loophole tied to the flaw. Confidence in the patch process is weakening as real-world attacks persist. 

Several admins report breaches on FortiGate units running FortiOS 7.4.9, along with systems updated to 7.4.10. While Fortinet claimed a fix arrived in December via version 7.4.9 - tied to CVE-2025-59718 - one user states that internal confirmation showed the flaw persisted past that patch. Updates such as 7.4.11, 7.6.6, and 8.0.0 are said to be underway, aiming for a complete resolution. 

One case involved an administrator spotting a suspicious single sign-on attempt on a FortiGate system with FortiOS version 7.4.9. A security alert appeared after detection of a freshly added local admin profile, behavior seen before during prior attacks exploiting this flaw. Activity records indicated the new account emerged right after an SSO entry tied to the email cloud-init@mail.io. That access came from the IP 104.28.244.114, marking another point in the timeline. 

A few others using Fortinet noticed very similar incidents. Their firewall - running version 7.4.9 of FortiOS - logged an identical email and source IP during access attempts, followed by the addition of a privileged profile labeled “helpdesk.” Confirmation came afterward from Fortinet’s development group: the security flaw remained active even after update 7.4.10. 
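The indicators reported in these incidents (an SSO login tied to cloud-init@mail.io or the IP 104.28.244.114, followed by creation of a new admin profile) lend themselves to a simple log triage pass. The log format below is a simplified assumption, not actual FortiOS syntax:

```python
# Sketch of a triage check for the reported intrusion pattern. The IOC
# tokens come from the incidents described above; the log line format is
# an assumption for illustration.

IOC_TOKENS = ("cloud-init@mail.io", "104.28.244.114")

def triage(log_lines):
    """Return log lines matching a known IOC or an admin-profile creation."""
    hits = []
    for line in log_lines:
        if any(tok in line for tok in IOC_TOKENS):
            hits.append(line)
        elif "add admin" in line.lower():
            hits.append(line)
    return hits
```

Matching on specific IOCs catches this campaign; the broader admin-creation check helps surface variants that rotate accounts and source IPs.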

Unexpectedly, the behavior aligns with earlier observations from Arctic Wolf, a cybersecurity company. In late 2025, they identified exploitation of vulnerability CVE-2025-59718 through manipulated SAML data. Instead of standard procedures, hackers leveraged flaws in FortiGate's FortiCloud login mechanism. Through this weakness, unauthorized users gained access to privileged administrator credentials. 

Nowhere in recent updates does Fortinet address the newest claims of system breaches, even after repeated outreach attempts. Without a complete fix available just yet, experts suggest pausing certain functions as a stopgap solution. Turning off the FortiCloud SSO capability stands out - especially when active - since attacks largely flow through that pathway. Earlier warnings from Fortinet pointed out that FortiCloud SSO stays inactive unless tied to a FortiCare registration - this setup naturally reduces exposure. 

Despite that, findings shared by Shadowserver in mid-December revealed over 25,000 such devices already running the feature publicly. Though efforts have protected most of them, around 11,000 still appear accessible across the web. Their security status remains uncertain. 

Faced with unpatched FortiOS versions, admins might consider revising login configurations while Fortinet works on fixes. Some could turn off unused single sign-on options as a precaution. Watching system records carefully may help spot odd behavior tied to admin access during this period.

Kimwolf Botnet Hijacks 1.8M Android Devices for DDoS Chaos

 

The Kimwolf botnet is one of the largest recently discovered Android-based threats, infecting over 1.8 million devices globally, mostly Android TV boxes and IoT devices. Named for its reliance on the wolfSSL library, the malware appeared in late October 2025, when XLab researchers noticed a suspicious C2 domain rising to the top of Cloudflare's charts, surpassing Google. Operators evolved the botnet from the Aisuru family, enhancing its evasion tactics to build a massive proxy and DDoS army. 

Kimwolf propagates through residential proxy services, taking advantage of misconfigured services like PYPROXY to gain access to home networks and attacking devices with open Android Debug Bridge (ADB) ports. Once executed, it drops payloads such as the ByteConnect SDK via pre-packaged malicious apps or direct downloads, converting victims into proxy nodes that can be rented on underground markets. The malware supports 13 DDoS techniques over UDP, TCP, and ICMP, while 96.5% of observed commands relate to traffic proxying for ad fraud, scraping, and account takeovers.
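Users and administrators can check whether a device on their network exposes the ADB entry point Kimwolf abuses with a simple TCP probe of port 5555, the standard ADB-over-network port. This sketch only tests whether the port accepts connections; it does not speak the ADB protocol:

```python
import socket

# Quick exposure check for ADB over TCP (port 5555), the entry point
# Kimwolf reportedly abuses. A successful connect means the port is open,
# not necessarily that ADB is unauthenticated.

def adb_port_open(host, port=5555, timeout=1.0):
    """Return True if the host accepts a TCP connection on the ADB port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Any device that answers here should have ADB disabled or restricted, in line with the hardening advice below.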

Capabilities extend to reverse shells for remote control, file management, and lateral movement within networks by altering DNS settings. To dodge takedowns, it employs DNS over TLS (DoT), elliptic curve signatures for C2 authentication, and EtherHiding via Ethereum Name Service (ENS) blockchain domains. Between November 19-22, 2025, it issued 1.7 billion DDoS commands; researchers estimate its peak capacity at 30 Tbps, fueling attacks on U.S., Chinese, and European targets.

Infections span 222 countries, led by Brazil (14.63%), India (12.71%), and the U.S. (9.58%), hitting uncertified TV boxes that lack updates and Google protections. Black Lotus Labs null-routed over 550 C2 nodes since October 2025, slashing active bots from peaks of 1.83 million to 200,000, while linking it to proxy sales on Discord by Resi Rack affiliates. Operators retaliated with taunting DDoS floods referencing journalist Brian Krebs. 

Security teams urge a focus on smart TV vulnerabilities such as firmware flaws and weak passwords, and push for intelligence sharing to dismantle such botnets. Users should disable ADB, update firmware, avoid sideloading, and monitor networks for anomalies. As consumer IoT grows, Kimwolf underscores the risks of turning homes into cyber weapons, demanding vendor accountability and robust defenses.

Cybercriminals Target Cloud File-Sharing Services to Access Corporate Data

 



Cybersecurity analysts are raising concerns about a growing trend in which corporate cloud-based file-sharing platforms are being leveraged to extract sensitive organizational data. A cybercrime actor known online as “Zestix” has recently been observed advertising stolen corporate information that allegedly originates from enterprise deployments of widely used cloud file-sharing solutions.

Findings shared by cyber threat intelligence firm Hudson Rock suggest that the initial compromise may not stem from vulnerabilities in the platforms themselves, but rather from infected employee devices. In several cases examined by researchers, login credentials linked to corporate cloud accounts were traced back to information-stealing malware operating on users’ systems.

These malware strains are typically delivered through deceptive online tactics, including malicious advertising and fake system prompts designed to trick users into interacting with harmful content. Once active, such malware can silently harvest stored browser data, saved passwords, personal details, and financial information, creating long-term access risks.

When attackers obtain valid credentials and the associated cloud service account does not enforce multi-factor authentication, unauthorized access becomes significantly easier. Without this added layer of verification, threat actors can enter corporate environments using legitimate login details without immediately triggering security alarms.

Hudson Rock also reported that some of the compromised credentials identified during its investigation had been present in criminal repositories for extended periods. This suggests lapses in routine password management practices, such as timely credential rotation or session invalidation after suspected exposure.

Researchers describe Zestix as operating in the role of an initial access broker, meaning the actor focuses on selling entry points into corporate systems rather than directly exploiting them. The access being offered reportedly involves cloud file-sharing environments used across a range of industries, including transportation, healthcare, utilities, telecommunications, legal services, and public-sector operations.

To validate its findings, Hudson Rock analyzed malware-derived credential logs and correlated them with publicly accessible metadata and open-source intelligence. Through this process, the firm identified multiple instances where employee credentials associated with cloud file-sharing platforms appeared in confirmed malware records. However, the researchers emphasized that these findings do not constitute public confirmation of data breaches, as affected organizations have not formally disclosed incidents linked to the activity.

The data allegedly being marketed spans a wide spectrum of corporate and operational material, including technical documentation, internal business files, customer information, infrastructure layouts, and contractual records. Exposure of such data could lead to regulatory consequences, reputational harm, and increased risks related to privacy, security, and competitive intelligence.

Beyond the specific cases examined, researchers warn that this activity reflects a broader structural issue. Threat intelligence data indicates that credential-stealing infections remain widespread across corporate environments, reinforcing the need for stronger endpoint security, consistent use of multi-factor authentication, and proactive credential hygiene.

Hudson Rock stated that relevant cloud service providers have been informed of the verified exposures to enable appropriate mitigation measures.

Ledger Customer Data Exposed After Global-e Payment Processor Cloud Incident

 

A fresh leak of customer details has emerged, linked not to Ledger’s systems but to Global-e - an outside firm handling payments for Ledger.com. News broke when affected users received an alert email from Global-e. That message later appeared online, posted to the platform X by ZachXBT, a well-known pseudonymous blockchain investigator. 

The breach exposed some customer records belonging to Ledger that were hosted within Global-e’s online storage system. Personal details, including names and email addresses, made up the compromised data, one report confirmed. What remains unclear is the number of people impacted. Global-e has not shared specifics about when the intrusion took place.  

Unexpected behavior triggered alerts at Global-e, prompting immediate steps to secure systems while probes began. Investigation followed swiftly after safeguards were applied, verifying unauthorized entry had occurred. Outside experts joined later to examine how the breach unfolded and assess potential data exposure. Findings showed certain personal details - names among them - were viewed without permission. Contact records also appeared in the set of compromised material. What emerged from analysis pointed clearly to limited but sensitive information being reached. 

Following an event involving customer data, Ledger confirmed details in a statement provided to CoinDesk. The issue originated not in Ledger's infrastructure but inside Global-e’s operational environment. Because Global-e functions as the Merchant of Record for certain transactions, it holds responsibility for managing related personal data. That role explains why Global-e sent alerts directly to impacted individuals. Information exposed includes records tied to purchases made on Ledger.com when buyers used Global-e’s payment handling system. 

While limited to specific order-related fields, access was unauthorized and stemmed from weaknesses at Global-e. Though separate entities, their integration during checkout links them in how transactional information flows. Customers involved completed orders between defined dates under these service conditions. Security updates followed after discovery, coordinated across both organizations. Notification timing depended on forensic review completion by third-party experts. Each step aimed at clarity without premature disclosure before full analysis. 

Still, the firm pointed out its own infrastructure - platform, hardware, software - was untouched by the incident. Security around those systems remains intact, according to their statement. What's more, since users keep control of their wallets directly, third parties like Global-e cannot reach seed phrases or asset details. Access to such private keys never existed for external entities. Payment records, meanwhile, stayed outside the scope of what appeared in the leak. 

Few details emerged at first, yet Ledger confirmed working alongside Global-e to deliver clear information to those involved. That setup used by several retailers turned out to be vulnerable, pointing beyond a single company. Updates began flowing after detection, though the impact spread wider than expected across shared infrastructure. 

This revelation follows earlier security problems connected to Ledger. In 2020, a flaw at Shopify, the e-commerce platform Ledger used, led to a leak affecting the details of 270,000 customers. In 2023, a separate incident caused financial damage of close to half a million dollars and touched multiple DeFi platforms. Though different in both scale and source, the newest issue highlights how reliance on outside vendors can still pose serious threats when handling purchases and private user information. 

Still, Ledger's online platforms showed no signs of an active breach on their end, yet warnings persist. Though nothing points to internal failures, customers are reminded to stay vigilant regardless, and even with no further official posts, the guidance leans toward caution.

ESA Confirms Cyber Breach After Hacker Claims 200GB Data Theft

 

The European Space Agency (ESA) has confirmed a major cybersecurity incident affecting external servers used for scientific cooperation. The hackers behind the operation claimed responsibility in a post on the hacking forum BreachForums, asserting that over 200 GB of data had been stolen, including source code, API tokens, and credentials. The incident highlights escalating cyber threats to space infrastructure amid growing interconnectedness in the sector. 

The incident is alleged to have occurred around December 18, 2025, with an actor using the pseudonym "888" reportedly maintaining access to ESA's JIRA and Bitbucket systems for roughly a week. ESA says the compromised systems represented a "very small number" of machines outside its main network, containing only unclassified data meant for engineering partnerships. The agency investigated, secured the affected systems, and notified stakeholders, while maintaining that no mission-critical systems were compromised. 

The leaked data reportedly includes CI/CD pipelines, Terraform files, SQL files, configurations, and hardcoded credentials, raising supply chain security concerns. The leak also includes screenshots from the breach that appear to show unauthorized access to private repositories, though it remains unverified whether the data is genuine or whether any of it is classified. Security experts believe that even unclassified data of this kind can enable lateral movement by highly sophisticated attackers. 
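Hardcoded credentials in repositories like those described above are typically found through pattern-based secret scanning. As a rough illustration only (this is not ESA tooling, and the regex rules and sample file contents below are invented; production scanners such as gitleaks use far larger rule sets plus entropy checks), a minimal scanner might look like this:

```python
import re

# Illustrative patterns only; real scanners carry hundreds of rules.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_token": re.compile(
        r"(?i)(api[_-]?key|token)\s*[:=]\s*['\"][A-Za-z0-9_\-]{20,}['\"]"
    ),
    "password_assignment": re.compile(r"(?i)password\s*[:=]\s*['\"][^'\"]{6,}['\"]"),
}

def scan_text(name, text):
    """Return (filename, line_number, rule_name) for every suspected secret."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for rule, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                hits.append((name, lineno, rule))
    return hits

# Hypothetical Terraform snippet with two planted secrets.
sample_tf = 'variable "db" {}\npassword = "s3cretPass!"\naccess = "AKIAABCDEFGHIJKLMNOP"\n'
for hit in scan_text("main.tf", sample_tf):
    print(hit)
```

Finding a credential this way is cheap for an attacker, too, which is why leaked Terraform and pipeline files are treated as a lateral-movement risk even when the data itself is unclassified.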

Adding to the trouble, the Lapsus$ group claimed a separate breach in September 2025, saying it had exfiltrated 500 GB of data containing sensitive files on spacecraft operations, mission specifics, and contractor information involving partners such as SpaceX and Airbus. ESA opened a criminal investigation in cooperation with the authorities, though the immediate effects were minimized. The agency has been hit by a string of incidents since 2011, including payment skimmers placed on its merchandise site. 

The series of breaches may reflect the loosely coupled nature of regional space cooperation across ESA's 23 member states. Space cybersecurity requirements are rising, as evidenced by open solicitations for security products, and incidents like this may foster distrust of global partnerships. Investigations into the long-term threats continue, but the need for stronger protection is pressing.

Sedgwick Confirms Cyberattack on Government Services Unit After TridentLocker Data Theft Claim

 

Sedgwick Claims Management Services Inc. has disclosed that a cyber incident affected one of its subsidiaries in late December, following claims by the TridentLocker ransomware group that it had exfiltrated sensitive company data.

The breach took place on Dec. 30 and involved Sedgwick Government Solutions Inc., a unit that delivers technology-driven claims and risk administration services to U.S. federal agencies.

In response, Sedgwick implemented standard incident containment measures, including isolating impacted systems, engaging external cybersecurity specialists to conduct forensic investigations, and notifying law enforcement authorities and relevant stakeholders.

According to the company, early findings suggest the intrusion was confined to a standalone file transfer system used by the subsidiary. Sedgwick emphasized that there is currently no indication that its primary corporate network or core claims management platforms were compromised.

Sedgwick Government Solutions works closely with several U.S. federal bodies, including the Department of Homeland Security and the Cybersecurity and Infrastructure Security Agency. As the investigation progresses, Sedgwick has begun alerting individuals and organizations that may have been affected—a process expected to continue for several weeks as forensic analysis advances.

The company’s confirmation follows assertions from the TridentLocker ransomware group, which claims to have obtained roughly 3.4 gigabytes of data and has threatened to release the information publicly if its demands are not satisfied.

TridentLocker operates using a data extortion strategy that prioritizes stealing and leaking data instead of encrypting victims’ systems.

“TridentLocker hitting a federal contractor serving DHS, ICE, CBP and CISA on New Year’s Eve is a statement,” Michael Bell, founder and chief executive of cybersecurity solutions provider Suzu Labs, told SiliconANGLE via email. “This group only emerged in November and they’re already going after companies that handle sensitive government claims and risk management data. Federal contractors remain high-value targets because attackers know these companies often have less mature security programs than the agencies they serve.”

Bell further noted that Sedgwick’s emphasis on network segmentation is reassuring but cautioned against minimizing the impact. He added that Sedgwick’s response about network segmentation “is what you want to hear, but 3.4 gigabytes from a file transfer system is still meaningful. These systems are designed to move documents between contractors and the agencies they serve and the investigation will determine what was actually in those files.”

Targeted Cyberattack Foiled by Resecurity Honeypot


 

Cybersecurity firm Resecurity has disclosed details of a targeted intrusion attempt against its internal environment in November 2025. Rather than simply blocking the attack, the company deliberately turned it into a counterintelligence operation, using advanced deception techniques to expose the adversaries behind it.

When a threat actor used a low-privilege employee account to try to gain access to the enterprise network, Resecurity's incident response team redirected the intrusion into a controlled honeypot seeded with synthetic data and built to resemble a realistic enterprise network. 

This move not only enabled real-time analysis of the attackers' infrastructure and tradecraft, but also triggered law enforcement involvement after multiple pieces of evidence linked the activity to an Egypt-based threat actor and to infrastructure associated with the ShinyHunters cybercrime group, which was subsequently shown to have falsely claimed responsibility for a data breach. 

Resecurity demonstrated how modern deception platforms, using synthetic datasets generated by artificial intelligence combined with carefully curated artifacts from previously leaked dark web material, can transform reconnaissance attempts by financially motivated cybercriminals into actionable intelligence.

Such active defense strategies are becoming increasingly important in cybersecurity operations because they yield intelligence without exposing customer or proprietary data.

The Resecurity team reported that threat actors operating under the name "Scattered Lapsus$ Hunters" publicly claimed on Telegram to have accessed the company's systems and stolen sensitive information, including employee data, internal communications, threat intelligence reports, and client data. The firm strongly denied the claim. 

The screenshots shared by the group were later confirmed to have come from a honeypot environment purpose-built by Resecurity, not from its production infrastructure. 

On November 21, 2025, the company's digital forensics and incident response team observed suspicious probes of publicly available services, along with targeted attempts to access a restricted employee account. 

Initial reconnaissance traffic traced back to Egyptian IP addresses, including 156.193.212.244 and 102.41.112.148, partially masked by commercial VPN services. Rather than blocking the intrusion, Resecurity shifted from containment to observation.

Defenders created a carefully staged honeytrap account filled with synthetic data in order to observe the attackers' tactics, techniques, and procedures. 

The decoy environment contained a total of 28,000 fake consumer profiles and nearly 190,000 mock payment transactions generated from publicly available patterns, including fake Stripe records and fake email addresses derived from credential "combo lists." 
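To give a sense of how a decoy dataset like this can be produced, here is a minimal sketch. It is not Resecurity's actual tooling: the names, domains, helper functions, and record counts below are invented for illustration, and the payment dictionaries only loosely follow the shape of a public Stripe Charge object rather than a full API-conformant record.

```python
import random
import string
import uuid

random.seed(7)  # deterministic decoy generation, for reproducibility

FIRST_NAMES = ["alex", "sam", "jordan", "casey", "riley"]
DOMAINS = ["example.com", "mail.example.net"]  # placeholder domains only

def fake_profile():
    """One synthetic consumer profile; no real PII is involved."""
    name = random.choice(FIRST_NAMES)
    return {
        "id": str(uuid.uuid4()),
        "name": name,
        # The real operation reportedly derived addresses from leaked
        # combo lists; here we simply synthesize plausible-looking ones.
        "email": f"{name}{random.randint(1, 9999)}@{random.choice(DOMAINS)}",
    }

def fake_charge(profile):
    """A mock payment record loosely shaped like a Stripe Charge."""
    suffix = "".join(random.choices(string.ascii_lowercase + string.digits, k=24))
    return {
        "id": "ch_" + suffix,
        "object": "charge",
        "amount": random.randint(500, 200_000),  # minor currency units
        "currency": random.choice(["usd", "eur"]),
        "status": "succeeded",
        "customer": profile["id"],
    }

profiles = [fake_profile() for _ in range(1000)]
charges = [fake_charge(random.choice(profiles)) for _ in range(5000)]
print(len(profiles), len(charges))
```

Internal consistency, such as every charge pointing at a real profile ID, is what makes a decoy dataset survive the kind of casual validation an attacker performs before exfiltrating it.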

To further enhance the data's authenticity, Resecurity reactivated a retired Mattermost collaboration platform and seeded it with outdated 2023 logs, convincing the attackers that the system was genuine. 

Between December 12 and December 24, the attackers routed approximately 188,000 automated requests through residential proxy networks in an attempt to harvest the synthetic dataset. The effort ultimately backfired: repeated connection failures exposed operational security lapses and revealed some of the attackers' real infrastructure. 
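The way proxy failures can expose a real address can be sketched with a toy heuristic: when a proxy chain drops, the next requests sometimes arrive directly from the operator's own connection. The telemetry format, IP addresses, and time window below are invented for illustration and are not Resecurity's detection logic.

```python
# Toy telemetry stream: (epoch_seconds, source_ip, outcome)
events = [
    (0, "203.0.113.10", "ok"), (1, "203.0.113.10", "ok"),
    (1, "203.0.113.11", "ok"), (2, "203.0.113.12", "ok"),
    (2, "203.0.113.10", "error"), (2, "203.0.113.10", "error"),
    (3, "198.51.100.7", "ok"),   # new address surfacing right after errors
    (5, "203.0.113.13", "ok"),
]

def proxy_fallback_candidates(events, window=2):
    """IPs first seen within `window` seconds after a burst of errors.

    Heuristic only: an address that appears for the first time
    immediately after proxy connection failures may be the operator's
    true origin falling back to a direct connection.
    """
    error_times = [t for t, _, outcome in events if outcome == "error"]
    first_seen = {}
    for t, ip, _ in events:
        first_seen.setdefault(ip, t)
    return sorted(
        ip for ip, t0 in first_seen.items()
        if any(0 < t0 - et <= window for et in error_times)
    )

print(proxy_fallback_candidates(events))  # prints ['198.51.100.7']
```

A real pipeline would correlate many more signals (ASN ownership, TLS fingerprints, session cookies), but the principle is the same: the defender waits for the adversary's infrastructure to fail.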

In a recent press release, Resecurity denied the breach allegation, stating that the systems cited by the threat actors were never part of its production environment but were deliberately exposed honeypot assets designed to attract and observe malicious activity at a safe distance.

According to a report published on December 24 and shared with reporters after external inquiries, the company's digital forensics and incident response teams first detected reconnaissance activity on November 21, 2025, a day after the threat actor began probing publicly accessible services. 

Telemetry gathered early in the investigation revealed several indicators of compromise, including connections from Egyptian IP addresses and traffic routed through Mullvad VPN infrastructure. 

Rather than moving immediately to containment, Resecurity deployed a controlled honeypot account inside an isolated environment. The attacker was able to authenticate to and interact with systems populated entirely with false employee, customer, and payment information while Resecurity closely monitored their actions. 

Specifically, the synthetic datasets were designed to replicate real enterprise data structures, including roughly 28,000 fictitious consumer profiles and nearly 190,000 dummy payment transactions formatted to adhere to Stripe's official API specifications. 

Between December 12 and December 24, the attacker made extensive use of residential proxy networks, generating more than 188,000 automated data exfiltration requests. 

During this period, Resecurity collected detailed telemetry on the adversary's tactics, techniques, and supporting infrastructure. Proxy disruptions led to several operational security failures that briefly exposed confirmed attacker IP addresses. 

As the deception continued, investigators introduced additional synthetic datasets, which prompted further mistakes that narrowed the attribution and helped identify the servers orchestrating the activity. 

After Resecurity shared the intelligence with law enforcement partners, a collaborating foreign agency followed up with a subpoena request. 

The attackers continued to make claims on Telegram, and their data was shared with third-party breach analysts, but independent review found no verifiable evidence of any compromise of real client systems. 

The Telegram channel used to distribute these claims was later suspended, and follow-on assertions from the ShinyHunters group were likewise determined to derive from the honeytrap environment.

The actors unknowingly accessed a decoy account and infrastructure, which was enough to confirm they had fallen into the honeytrap. The incident demonstrates both the growing sophistication of modern deception technology and the importance of embedding it within a broader, more resilient security framework to maximize its effectiveness. 

A honeypot and synthetic data environment can be a valuable tool for observing attacker behavior. However, security leaders emphasize that the most effective way to use these tools is to combine them with strong foundational controls, including continuous vulnerability management, zero trust access models, multifactor authentication, employee awareness training, and disciplined network segmentation. 

Resecurity's approach represents an evolution in defensive strategy, from a reactive model to one in which organizations proactively fight cyberthreats by gathering intelligence, disrupting adversaries' operations, and reducing real-world risk in the process. 

As cyber threats continue to evolve rapidly, the ability to observe, mislead, and anticipate hostile activity before meaningful damage occurs is becoming an increasingly important element of enterprise defense.

Together, the episodes offer a rare, transparent view of how modern cyberattacks unfold, and how they can be strategically neutralized before risk escalates to real data and systems. 

Ultimately, Resecurity's claims serve more as an illustration of how threat actors are increasingly relying on perception, publicity, and speed to shape narratives before facts are even known to have been uncovered, than they serve as evidence that a successful breach occurred. 

Defenders should take the lesson to heart: visibility and control can play a key role in preventing a crisis. As adversaries combine technical capabilities with psychological tactics, it is increasingly important for organizations to be able to verify, contextualize, and counter the false claims made against them. 

The Resecurity incident exemplifies how disciplined preparation and intelligence-led defense can turn an attempted compromise into strategic advantage, quietly, methodically, and without revealing what really matters, in an environment where trust and reputation are often the first targets.