A court in the Netherlands has taken strict action against the platform X and its artificial intelligence system Grok, directing both to stop enabling the creation of sexually explicit images generated without consent, as well as any material involving minors. The ruling carries a financial penalty of €100,000 per day for each entity if they fail to follow the court’s instructions.
This decision, delivered by the Amsterdam District Court, marks a pivotal legal development. It is the first time in Europe that a judge has formally imposed restrictions on an AI-powered image generation tool over the production of abusive or non-consensual sexual content.
The legal complaint was filed by Offlimits together with Fonds Slachtofferhulp. Both groups argued that the pace of regulatory enforcement had not kept up with the speed at which harm was being caused. Existing Dutch legislation already makes it illegal to create or share manipulated nude images of individuals without their permission. However, concerns intensified after Grok introduced an image-editing capability toward the end of December 2025, which led to a sharp increase in reported incidents. On February 4, 2026, Offlimits formally contacted xAI and X, demanding that the feature be withdrawn.
In its ruling, the court instructed xAI to immediately halt the production and distribution of sexualized images involving individuals living in the Netherlands unless clear consent has been obtained. It also ordered the company to stop generating or displaying any content that falls under the legal definition of child sexual abuse material. Alongside this, X Corp and X Internet Unlimited Company have been required to suspend Grok’s functionality on the platform for as long as these violations continue.
Legal representatives for Offlimits emphasized that the so-called “undressing” feature cannot remain active anywhere in the world, not just within Dutch borders. The court further instructed xAI to submit written confirmation explaining the steps taken to comply. If this confirmation is not provided, the daily financial penalty will continue to apply.
Doubts Over Safeguards
A central question for the court was whether the companies had actually made it impossible for such content to be created, as they claimed. The judges concluded that this had not been convincingly demonstrated.
During a hearing on March 12, lawyers representing xAI argued that strong safeguards had been implemented starting January 20, 2026. They maintained that Grok no longer allowed the generation of non-consensual intimate imagery or content involving minors.
However, evidence presented by Offlimits challenged that claim. On March 9, the same day the companies denied any remaining risk, it was still possible to produce a sexualized video of a real person using only a single uploaded image. The system did not require any confirmation of consent. The court viewed this as a contradiction that cast doubt on the effectiveness of the safeguards.
The judges also pointed out inconsistency in xAI’s position regarding child sexual abuse material. The company argued both that such content could not be generated and that it was not technically possible to guarantee complete prevention.
Legal Responsibility and Framework
The court determined that creating non-consensual “undressing” images amounts to a violation of the General Data Protection Regulation. It also found that enabling the production of child sexual abuse material constitutes unlawful behavior under Dutch civil law.
Importantly, the court rejected the argument that responsibility should fall solely on users who input prompts. Instead, it concluded that the platform itself, which controls how the system functions, must take responsibility for preventing misuse.
This reasoning aligns with the Russmedia judgment issued by the Court of Justice of the European Union. That earlier ruling established that platforms can be treated as joint controllers of personal data and cannot rely on intermediary protections to avoid obligations under European data protection law. Applying this principle, the Dutch court found that xAI and X’s European entity are responsible for how personal data is processed within Grok’s image generation system.
The court went a step further by highlighting a key distinction. Unlike platforms that merely host user-generated content, Grok actively creates the material itself. Because xAI designed and operates the system, it was identified as the party responsible for preventing unlawful outputs, regardless of who initiates the request.
Jurisdictional Limits
The ruling applies differently across entities. X Corp, which is based in the United States, faces narrower restrictions because it does not directly provide services within the Netherlands. Its obligation is limited to suspending Grok’s functionality in relation to non-consensual imagery.
By contrast, X Internet Unlimited Company, which serves users within the European Union, must comply with both the ban on non-consensual sexualized content and the restrictions related to child abuse material.
Increasing Global Scrutiny
The case follows findings from the Center for Countering Digital Hate, which estimated that Grok generated around 3 million sexualized images within a ten-day period between late December 2025 and early January 2026. Approximately 23,000 of those images appeared to involve minors.
Regulatory pressure is also building internationally. Ireland’s Data Protection Commission has launched an investigation under GDPR rules, while the European Commission has opened proceedings under the Digital Services Act. In the United Kingdom, Ofcom has initiated action under its Online Safety framework. In the United States, legal challenges have also emerged, including lawsuits filed by teenagers in Tennessee and by the city of Baltimore.
At the policy level, the European Parliament has supported efforts to strengthen the AI Act by introducing an explicit ban on tools designed to digitally remove clothing from images.
A Turning Point for AI Accountability
Authorities are revising how they approach artificial intelligence systems. Earlier debates often treated platforms as passive intermediaries. However, systems like Grok actively generate content, which changes the question of responsibility.
The decision makes it clear that companies developing such technologies are expected to take active steps to prevent harm. Claims about technical limitations are unlikely to be accepted if evidence shows that misuse remains possible.
X and xAI have been given ten working days to provide written confirmation explaining how they have complied with the court’s order.
A China-aligned threat actor has resumed targeting European government and diplomatic entities; the activity began in mid-2025, following a roughly two-year lull in the region. The campaign has been attributed to TA416, whose activity overlaps with clusters tracked as DarkPeony, Red Lich, RedDelta, SmugX, Vertigo Panda, and UNC6384.
According to Proofpoint, “This TA416 activity included multiple waves of web bug and malware delivery campaigns against diplomatic missions to the European Union and NATO across a range of European countries. Throughout this period, TA416 regularly altered its infection chain, including abusing Cloudflare Turnstile challenge pages, abusing OAuth redirects, and using C# project files, as well as frequently updating its custom PlugX payload.”
Additionally, TA416 ran multiple campaigns against government and diplomatic organizations in the Middle East after the US-Iran conflict in February 2026, aiming to gather regional intelligence about the conflict.
TA416 also has a history of technical overlaps with a separate group, Mustang Panda (also tracked as UNK_SteadySplit, CerenaKeeper, and Red Ishtar). Collectively, the two groups appear under names including Hive0154, Twill Typhoon, Earth Preta, Temp.HEX, Stately Taurus, and HoneyMyte.
TA416’s attacks rely on PlugX variants, while Mustang Panda has repeatedly deployed tools such as COOLCLIENT, TONESHELL, and PUBLOAD. A common thread between the two groups is the use of DLL side-loading to load their malware.
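DLL side-loading works because of the order in which Windows resolves a DLL name imported by an executable: the application's own directory is probed before the system directories, so a malicious DLL dropped next to a trusted, signed EXE is loaded in its place. The sketch below is a simplified Python model of that search-order logic for illustration, not a reimplementation of the actual Windows loader; the file names and directories are invented for the demo.

```python
# Conceptual sketch of why DLL side-loading works (illustrative only).
# Windows probes a fixed search order when resolving an imported DLL name;
# the application directory comes before the system directories.
import os
import tempfile

def resolve_dll(dll_name, app_dir, system_dirs):
    """Return the first path where dll_name is found, in search order."""
    for directory in [app_dir] + system_dirs:   # app dir is probed first
        candidate = os.path.join(directory, dll_name)
        if os.path.exists(candidate):
            return candidate
    return None

# Simulate: the real version.dll lives in System32, but an attacker plants
# a copy of the same name next to the legitimate executable.
with tempfile.TemporaryDirectory() as app_dir, \
     tempfile.TemporaryDirectory() as system32:
    open(os.path.join(system32, "version.dll"), "w").close()   # legitimate DLL
    open(os.path.join(app_dir, "version.dll"), "w").close()    # planted DLL
    winner = resolve_dll("version.dll", app_dir, [system32])
    print(winner.startswith(app_dir))  # True: the planted DLL wins
```

Because the trusted EXE itself performs the load, the malicious code ends up running inside a signed, legitimate-looking process, which is why the technique is popular for evading detection.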
TA416’s latest campaigns against European entities combine web bug and malware-delivery operations. The attackers use freemail sender accounts for reconnaissance and deliver the PlugX backdoor through malicious archives hosted on Google Drive, Microsoft Azure Blob Storage, and compromised SharePoint instances. Similar PlugX campaigns were documented by Arctic Wolf and StrikeReady in October 2025.
According to Proofpoint, “A web bug (or tracking pixel) is a tiny invisible object embedded in an email that triggers an HTTP request to a remote server when opened, revealing the recipient's IP address, user agent, and time of access, allowing the threat actor to assess whether the email was opened by the intended target.”
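The mechanism Proofpoint describes can be sketched end to end: a minimal, illustrative Python tracker that serves a 1x1 transparent GIF and records exactly what the inbound request leaks (IP address, User-Agent, and a per-recipient identifier in the URL). All names, ports, and the recipient ID below are invented for the demo.

```python
# Minimal sketch of how a web bug (tracking pixel) works, for illustration.
# The HTTP request made when the pixel is rendered is itself the leak.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

# A 1x1 transparent GIF: the classic tracking-pixel payload
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00"
         b"!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01"
         b"\x00\x00\x02\x02D\x01\x00;")

hits = []  # what the tracker learns about each email open

class PixelHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Record what Proofpoint describes: IP, User-Agent, time of access
        hits.append({
            "ip": self.client_address[0],
            "user_agent": self.headers.get("User-Agent"),
            "path": self.path,          # often carries a per-recipient ID
        })
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.end_headers()
        self.wfile.write(PIXEL)

    def log_message(self, *args):       # silence default console logging
        pass

server = HTTPServer(("127.0.0.1", 0), PixelHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Simulate a mail client rendering <img src=".../pixel.gif?id=target42">
url = f"http://127.0.0.1:{server.server_port}/pixel.gif?id=target42"
urlopen(Request(url, headers={"User-Agent": "DemoMailClient/1.0"})).read()
server.shutdown()

print(hits[0]["user_agent"])  # DemoMailClient/1.0
print(hits[0]["path"])        # /pixel.gif?id=target42
```

The victim never clicks anything; merely opening an email whose client loads remote images is enough to confirm the address is live and the target is reading, which is why such web bugs are a standard reconnaissance step before malware delivery.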
The TA416 attacks in December last year abused third-party Microsoft Entra ID cloud applications to redirect victims to downloads of malicious archives. Phishing emails in this campaign linked to Microsoft’s legitimate OAuth authorization endpoint, which then forwarded the user to an attacker-controlled domain that delivered PlugX.
According to experts, "When the MSBuild executable is run, it searches the current directory for a project file and automatically builds it."
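That default-build behavior is what makes bare C# project files useful in an infection chain. The hypothetical fragment below sketches a minimal MSBuild project whose target simply launches a command at "build" time; the file name and command are placeholders, not artifacts from the campaign. If this is the only project file in a directory, running `msbuild.exe` there with no arguments finds and executes it automatically.

```xml
<!-- evil.csproj: hypothetical illustration of MSBuild project abuse.
     Running msbuild.exe in this directory picks up the file and runs
     the default target, executing the attacker-chosen command. -->
<Project DefaultTargets="Run"
         xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <Target Name="Run">
    <!-- Exec is a standard MSBuild task; any command can go here -->
    <Exec Command="calc.exe" />
  </Target>
</Project>
```

Since msbuild.exe is a signed Microsoft binary present on many systems, this gives attackers a living-off-the-land way to execute code without dropping a conventional loader.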