Reclaiming Truth From Bots 


Cover photo by Dr Bonnie O. Wong


RNW Media introduces our new series, Digital Media Shakers. We’re highlighting bold voices shaping public interest and independent media. These are individuals whose passion goes beyond the medium – they are working to create lasting change in the media industry. 

Have you ever questioned whether a viral video or an online friend was real? These aren’t just silly questions anymore. With the rise of synthetic media – AI-generated content that can mimic reality – truth is becoming harder to pin down.

Meet Claire

One of the experts keeping track of what is real is Claire Leibowicz. Claire is head of the AI and Media Integrity Programme at the Partnership on AI (PAI), a unique organization that brings experts together to create recommendations and resources to ensure AI benefits humanity.

We caught up with PAI’s top advocate during the hustle and bustle of the International Journalism Festival in April. Check out our conversation! 

RNW Media: Let’s start with your work. What do you focus on at PAI? 

Claire: “I lead the AI and Media Integrity Programme. The programme focuses on building a healthy information environment. That includes questions like how we verify the authenticity of video content. It also means equipping our 120+ partners – from industry, civil society, media and academia – with the tools to work together on these challenges.” 

RNW Media: That’s a huge task! How does your team’s work reflect PAI’s global mission? 

Claire: “A huge part of our mission is including diverse perspectives in how technology is developed and deployed. When companies and technologists decide which tools to build or what techniques to use for algorithms and content generators, we advocate for decisions that protect civil liberties, support the news community and serve a broad spectrum of people.
 
We work with stakeholders across regions and disciplines to help shape technology in socially empowering ways.”

Adapting to the Future of Tech

RNW Media: Synthetic media – like deepfakes – has become a major issue. What are you seeing and how is your team responding? 

Claire: “We’ve done extensive work on synthetic media – like deepfakes – by developing tools that help identify fake content in real-world scenarios. Our goal is to support newsrooms, ensure their content is recognized as authentic, and make sure technology companies are creating systems that signal where content comes from.

We believe it’s essential to bring a wide range of voices into meaningful collaboration. It’s why we created guidance like our synthetic media framework, which is critical in this moment when technology touches every field and every part of human life.”

RNW Media: When you think about the future, what technologies do you think will change how we interact online? 

Claire: “I’m fascinated by how we engage with visual storytelling techniques – think photo or video – on our phones. It’s how many people consume information today.

But at the same time, we’ve also been paying attention to AI agents. These are tools and systems that don’t just wait for instructions but can act on your behalf. So, what happens when there’s an AI version of me – Claire – that can scan the web, write a research paper, choose a restaurant or even carry on conversations on a dating app? That’s where things are headed. 
 
And that shift raises many questions about trust, accountability and who is behind decisions that impact the content we see.” 

“I’d say your North Star remains: What does my audience understand and need? That’s always guided digital media.”

Shaping Ethics and Values Around AI

RNW Media: What’s needed to keep online spaces more trustworthy and transparent? 
 
Claire: “We need more transparency both in how technology companies build tools that let people generate and share content, and in how that content is labeled. It’s important to know where something came from, how it was made and how it’s being distributed. 

We also need better visibility into how these companies make decisions: who they are consulting, how they are being held accountable, and how they can share data safely.

And it’s about balance: encouraging creativity and positive uses of AI without stifling innovation, while still being able to identify and prevent harmful outcomes. Clear boundaries and shared norms can help us get there.”

RNW Media: How can journalists use AI creatively without losing their ethical grounding?

Claire: “Journalists already follow a gold standard when it comes to ethics – values like transparency and accuracy. And those same principles apply when evaluating the use of AI.  
 
For example, if you’re using deepfake technology to tell a story, you need to ask: Will my audience understand this? Should I label it to make that clear? 

It’s about carrying the ethos of journalism into this new space. 

So, I’d say your North Star remains: What does my audience understand and need? That’s always guided digital media. Staying anchored in that will serve you well in the age of AI.”

Claire reminds us that while these technologies may be new, the core questions remain: What’s true? Who is this for? And how do we make sure the people behind the screen are still the ones being served?

 Stay tuned for the next Digital Media Shaker in our series! 
