Embracing the future of journalism 

In the series, Digital Media Shakers, we highlight bold voices shaping public interest and independent media. These are individuals whose passion goes beyond the medium – they are working to create lasting change in the (digital) media industry.

Using tech and AI tools isn’t exactly new for media professionals, especially not for journalists. Newsrooms use AI to streamline their workflows: sorting data, boosting research capabilities and even writing basic reports. However, as AI evolves rapidly, thinking carefully about how it is applied is crucial. 

Meet Laurens

Laurens Vreekamp is always thinking one step ahead. As a trainer and design thinker, he works with a wide range of media organisations to explore how AI – from machine learning to language models – can be applied thoughtfully in journalism. For Laurens, AI isn’t inherently good or bad, either in principle or in application. (Though he considers some parts of AI system development problematic – more on that later!) It’s about intent and impact. 

He recently led an RNW Media Brown Bag lunch for our staff and Vine community members. We got to chat afterward about his interests, his book tailored for media professionals, his thoughts on AI and important values to carry when applying it to your work. Check out our conversation! 

What is Ethical AI?

RNW Media: Hey, Laurens! What do you think about when you hear, “ethical use of technology & AI”? 

Laurens: It means you’ve balanced the use of AI tools in a way that’s fair and doesn’t perpetuate stereotypes or biases embedded in training data or the outputs of generative AI. It’s also about compensation and consent – ensuring that artists, writers, illustrators and musicians are credited and asked for permission if their work is used. 

There’s also the energy it takes to power the machines and the hidden labour – the ghost work – done in low-income countries, often in the Global Majority. These people fine-tune and train chatbots and large language models. Recently, there’s also been a geopolitical dimension with debates in Europe around using Chinese and/or American technology. It’s all part of the ethical complications around AI. 

Then it goes philosophical! There’s authorship and agency – who’s producing the content? Is it the machine or the human? And if you’re being paid to produce original content, to what extent can you rely on machines? And while fairness and responsibility are often approached through a technical lens – something fixable via software or statistics – it also reflects societal inequalities.  

All of this should be considered when discussing the ethical use of AI. 

RNW Media: And how are journalists using AI in their daily work? 

Laurens: AI has long been used in journalism – from early tools like translation, summarisation, clustering information and entity recognition to more recent applications like sentiment analysis, SEO, headline suggestions, and metadata generation.  

A current trend is “vibe coding”: using AI to help teams prototype apps or newsletters by prompting a version of a product and sparking collaborative discussions. It’s not about replacing developers but about supporting idea generation and briefs. 

“Tech can open new creative possibilities. If you understand what it can do, you can find new ways to work […]”

RNW Media: As you can imagine, there’s a lot of hesitancy around using AI tools – even for us. So, what are some pitfalls and opportunities AI presents for journalists and media makers? 

Laurens: I mentioned the example of the New York Times’ guidelines during our Brown Bag session. The Times initially developed its AI guidelines through experimentation. Many newsrooms start this way and then hold internal discussions about what’s acceptable and what crosses the line. These decisions are human – they’re not binary. 

For example, the Times says it’s OK to use a chatbot to help generate interview questions for a CEO. But the key principle is there must always be a human in the loop. A machine can help, but a human ultimately decides. 

It should be a tool for validation and inspiration, not something that creates a publishable story on its own. Most newsrooms I’ve worked with or visited agree with that. 

Sharing Industry Insights

RNW Media: Right! You offer great industry insights in your book, The Art of AI (only available in Dutch), which is a practical guide on machine learning for media makers. What inspired you to write the book?  

Laurens: When you talk to machine learning engineers, they light up about the potential of this technology. But public discourse often focuses on efficiency and output, which I think is a dead end. It turns us into machines. 

Instead, AI should help us become more human. Tech can open new creative possibilities. If you understand what it can do, you can find new ways to work, not just do more of the same. 

RNW Media: Is there an example you can share from the book? 

Laurens: Yes, one example is the “Rewild” project by NRC, a Dutch newspaper. They asked readers to leave one square metre of garden untouched for a year, then document what happened. Over 8,500 people joined in. They knew they could manage that scale thanks to technology, which enabled them to analyse the data and engage audiences meaningfully. 

There are a ton of great examples in the book where AI didn’t replace human work – it enabled it. And interestingly, these projects used discriminative AI, not generative AI.

Bringing Ethical AI to the Forefront

RNW Media: Love that example! To wrap it up, what are three takeaways you would share for journalists and media makers who want to use AI safely and transparently? 

  1. AI is not magic. Once you look under the hood and understand how it works, you see it’s just software – not consciousness or true intelligence. 
  2. The prep work matters. You need to put in a lot of human effort to get something useful from AI.  
  3. Have conversations. It’s important to reach consensus about your goals, outcomes, and how you measure quality when using AI tools. 

Anyone can prompt a tool. But meaningful prompting comes from human understanding. The generative app is just a conversation starter, not the final product. 

 Stay tuned for the next Digital Media Shaker in our series! 
