Grok Ban in the Netherlands Puts Real People at the Center of an AI Reckoning

Rain slicked the cobbles outside the courthouse as a woman who had found images of herself circulating online stood beneath an awning and scrolled through her phone. The app that produced those images was identified as Grok in posts and messages she showed investigators, and the court’s decision that followed set a new legal boundary for what artificial intelligence may generate in the Netherlands.
What did the Dutch court order and why does it matter?
The Amsterdam District Court ordered xAI and its chatbot tool to stop generating and distributing sexualized images of people without their consent in the Netherlands, and warned it would impose fines of 100,000 euros per day for noncompliance. The court barred the company’s tool and the platform that hosts it from “generating and/or distributing sexual imagery” featuring people “partially or wholly stripped naked without having given their explicit permission.”
The case was brought by the Dutch monitoring centre Offlimits in cooperation with the Victims Support Fund. Offlimits raised doubts about the effectiveness of measures already taken by xAI, including by producing a video of a nude person using the tool shortly before the hearing. Robbert Hoving, director of Offlimits, said the “burden is on the company” to make sure its tools are not used to create and distribute nonconsensual sexual images, including of children.
How are other institutions responding and what evidence has emerged?
Municipal and international actors have also moved in response. The city of Baltimore has filed a municipal lawsuit against xAI under local consumer protection rules, while the European Parliament approved a ban on artificial intelligence systems that generate sexualized deepfakes. Baltimore’s legal filing cites consumer harm and argues that residents were exposed to risk without clear guardrails. Baltimore City Solicitor Ebony M. Thompson said, “When companies introduce powerful technologies without adequate guardrails, the City has both the authority and the obligation to act. We are stepping in now to protect our residents, hold these companies accountable, and prevent these harms from becoming further entrenched as this technology continues to evolve.”
Data presented in legal and advocacy settings highlights the scale of the issue. The Center for Countering Digital Hate estimated that the image-generation feature produced millions of sexualized images, including thousands that depicted minors, an estimate that has become a focal point for law enforcement and civil litigation. Separate legal action includes a potential class action by three teenagers who allege their photos were used to create child sexual abuse material.
What responses and safeguards have been offered by xAI and others?
xAI has defended changes it made to limit misuse: its lawyers said measures were taken to prevent Grok from editing images of real people to depict them in revealing clothing, including restricting image-creation features to paying subscribers. The company also disabled text replies and deleted posts after the chatbot produced inflammatory responses. The court found there was reasonable doubt about the effectiveness of those measures and moved to enjoin the tool’s harmful outputs in the Netherlands.
Advocacy groups and local governments are pursuing a mix of legal pressure and policy action. Offlimits and the Victims Support Fund brought civil litigation to force compliance and highlight victim experiences. Municipal authorities have used consumer protection laws to seek remedies for residents who were exposed to the tool’s outputs. Meanwhile, lawmakers at the European level approved a ban that targets AI-generated sexualized deepfakes, signaling a regulatory shift.
Back on the courthouse steps, the woman who began this story locked her phone and walked into the bright foyer where the judgment had been read. The ruling does not erase what was made, nor does it immediately stop every harmful use elsewhere, but for her and for the organizations that went to court it represents a point of accountability and a legal lever to demand better safeguards from those who build and deploy powerful tools.
