This study explores how young people interact with, perceive, and critically respond to AI-generated content (AIGC), particularly amid growing difficulty in distinguishing human-created from machine-generated content. The rapid dissemination of AIGC, often lacking transparency and accountability, together with image manipulation and false narratives, raises serious concerns for digital discourse and information integrity and contributes to rising distrust of the digital ecosystem.
We employed a mixed-methods approach focusing on young people (aged 18-35) of diverse nationalities, with particular attention to Global South countries including Benin, Iraq, Morocco, Nepal, Nigeria, and Uganda. The research combined focus groups across these locations, survey data, and social listening analysis of online conversations about AIGC. Findings reveal that while participants actively engage with AIGC, they also express apprehensions about misinformation, bias, authenticity, and data privacy. Importantly, this research draws attention to youth expectations for the future of AIGC and calls for a multi-layered response: technical safeguards, media and information literacy initiatives, and platform accountability to promote responsible AIGC use and sustain user trust in digital media ecosystems.