AI for Good: But good for whom?  


Why we need an audience-first approach to AI in public interest journalism 


Surabhi Srivastava and Ana Garza Ochoa 

As we attend the AI for Good Global Summit this week in Geneva from July 8-11, two questions have been playing on our minds – AI for good for whom, and who gets to decide? No, we aren't trying to be facetious or cheeky here. The summit, of course, is geared towards unpacking, understanding, and deliberating on how AI (in all its forms) can be used to address some of our most pressing global problems, be it the online disinformation crisis, climate change, access to basic healthcare, or even space exploration.

However, it has also made us reflect on who bears the responsibility and accountability for critical decisions on AI advancements, given that these have the potential to shape the present and future of human civilisation. Moreover, questions about who gets to participate in the discussions and debates, who gets heard, and who has a seat at the decision-making table are more relevant than ever, particularly as wealth and power become deeply intertwined with decisions about the trajectory of AI development and deployment.

As media development practitioners, we will illustrate these dilemmas through the case of integrating AI to enable and strengthen the public interest information ecosystem – a public good essential for sustaining democratic discourse, trust in public institutions, and thriving open societies. This case is instructive in a global context where there has been an accelerated push to use AI in newsrooms, while, it is worth noting, a nuanced understanding of how news consumers view the use of AI in news remains a critical gap – both their enthusiasm about AI and their apprehensions and concerns about it. This is gradually changing, however, with new insights that aim to bridge the gap.

According to the recently published Digital News Report 2025 by the Reuters Institute at the University of Oxford, AI-powered chatbots and platforms are increasingly becoming a source of news globally, particularly for young news consumers, yet scepticism about the use of AI in news is also rising, with audiences citing apprehensions about the transparency, accuracy, and trustworthiness of AI-generated or AI-mediated news content. This sits at odds with the urgency and push to integrate AI into news creation, production, and publishing workflows across small and large newsrooms, without first pausing to reflect on whether audiences perceive AI in news as meeting their needs and expectations of media-makers and media organisations. It also raises the question: does the use of AI in news actually serve the public good? And if that is the intended aim, why is the audience perspective the missing link?

These trends are corroborated by other studies examining people's attitudes towards AI, in news and beyond. Pew Research Center reported an increase in the proportion of U.S. adults who were more concerned than excited about generative AI tools like ChatGPT: 52 percent of U.S. adults surveyed after ChatGPT's launch reported being more worried about AI, compared with 38 percent who had reported such concerns about gen-AI before the chatbot's release in late 2022. This trend has persisted as AI-generated content and chatbots permeate more of our everyday lives and tasks. As writer Reece Rogers put it in a recent article in Wired, "This generalized animosity towards AI has not abated over time. Rather, it's metastasized."

In our own research assessing perceptions of AI-generated content (AIGC) among young people (ages 18-35) across seven countries (Benin, Iraq, Morocco, Nepal, the Netherlands, Nigeria, and Uganda), we found that young audiences tend to trust text-based AIGC more than visual or audio formats, reflecting their expectation that such content is produced under human supervision. That trust quickly erodes, however, around sensitive topics such as politics or conflict-related news, where synthetic content is seen as deceptive or unnecessary. Participants described actively verifying content using visual cues and, in some cases, fact-checking tools. Their main concerns were misinformation, content authenticity, biased representations, and data consent.

These initial insights from their lived experiences ought to inform new AI applications, designs, and content creation within media organisations. This is not to say that AI has nothing meaningful to contribute to making news more accessible to diverse and marginalised audiences – for instance, by translating news stories into different languages or letting readers use an AI chatbot to delve deeper into a news story. But implementing AI while maintaining connection and trust with readers requires a multi-layered approach and cross-industry collaboration.

We may want to ask, then, whether a more specific, concrete, and user-informed application of AI to news and journalism accomplishes greater public good. The young people surveyed in our research expressed a mix of emotions about this technology, ranging from fascination to fear, and a clear preference for AI to support, not replace, human creativity. Rather than a blanket push to integrate AI tools across the board despite audiences' growing unease with AI in news and everyday life, it is time we navigate together the responsibility of listening to our audiences first, particularly those who are often left unheard and exist at the margins.
