As an international media development organisation dedicated to strengthening independent and public-interest media, and committed to building safe, inclusive and trustworthy online spaces, we are deeply concerned by the misuse of artificial intelligence to create and circulate explicit imagery. As reflected in recent reports about AI-generated deepfake images of a Dutch woman Member of Parliament shared online, such harmful content inflicts personal distress and undermines public confidence in online information ecosystems. More importantly, it further shrinks the civic space for women and marginalised groups and has a chilling effect on their right and ability to participate freely and without fear in the digital public square. It limits their freedom of expression, damages their career and professional prospects, and exacts a psychological toll on their mental and emotional well-being.
“Grok AI generated about 3m sexualised images in less than two weeks, including 23,000 that appear to depict children.” – Robert Booth, The Guardian
These challenges arise against a broader backdrop in which Dutch media have called for stronger information integrity and better protection of independent media. In late 2025, Dutch media organisations jointly urged government officials to prioritise media policy in the upcoming coalition talks to safeguard the reliability and safety of the information landscape, an appeal made all the more urgent by incidents like this one.
Moreover, the misuse of AI tools to generate non-consensual intimate imagery is not confined to the Netherlands; it has become a global issue. Across multiple countries, deepfakes and non-consensual intimate imagery are used as tools to manipulate and extort vulnerable groups. Women and minors are often the prime targets, as investigations in the United Kingdom, the United States and Canada, among others, have shown, most recently in the case of Grok being used to generate nudified images of women and minors.
These incidents highlight the urgent need for better regulation of AI tools and for preserving the integrity and reliability of the digital information ecosystem. We believe that governments, online platform providers and AI developers alike share responsibility for stronger accountability measures. As a civil society actor, we assume our role in advocating for policy improvements, but also for greater attention to digital media and tech literacy, so that victims of such threats are better aware of their rights and better placed to use the legal channels available to protect themselves.
We therefore call on:
– Policymakers and regulators to accelerate the development and enforcement of frameworks that protect individuals from AI-facilitated harms.
– Tech companies to implement robust safeguards in generative AI tools against the creation and dissemination of harmful AI-generated images and content, including deepfakes.
– Media and civil society actors to intensify efforts in public education and digital media and AI literacy.