Guest Op-Ed: Giovana Fleck and Sana Naqvi
[Caution note: The following essay contains reflections about gender-based violence, especially online harassment.]
Forced to quit is a project that maps public figures who identify as female and were forced to leave their positions due to different forms of gender-based violence (GBV). These women work as politicians, activists, and journalists, and have endured gendered disinformation, hate speech, misogyny, and threats of all sorts. The database is collaborative – and growing. It maps cases at a global level, telling stories of silencing and violence.
When we read about the growing number of cases of technology-facilitated gender-based violence (TFGBV), we are often struck by the scale. The United Nations estimates that “approximately 38% of women have personally experienced online violence, while 85 per cent have witnessed it happening to other women.”
“It can just take one concerted attack for a woman to go silent,” said Professor Julie Posetti at the Copenhagen Conference on Information Integrity. Posetti co-authored a report that surveyed hundreds of women about the chilling escalation of gendered online harms. Her remark resonates because each instance of violence further restricts our information ecosystems. For each woman who is silenced, forced to quit, an information vacuum is created – one easily filled with misinformation that promotes gender stereotypes and weakens information integrity and the pluralism of voices in the digital media ecosystem. In most cases, these attacks originate from ruling regimes and officials at the highest levels of government, religious leaders and institutions, political parties, and conservative individuals, all working towards the goal of invalidating and silencing women and gender minorities.
With the rise of authoritarianism and the shrinking of (digital) civic spaces globally, we need pluralism and diversity of voices more than ever, and we need to challenge the patriarchal narratives that work to push women and gender-diverse people out of online conversations. Control is exercised not only by curtailing freedom of expression, but also by creating barriers to accessing information. Media makers, particularly those who work on Sexual and Reproductive Health and Rights (SRHR) in restrictive contexts, have been targeted and silenced, and have seen their content branded by conservative audiences as misaligned with religious and cultural values. As a result, the public loses trusted voices on issues that directly affect their lives and bodily autonomy. What we see are ecosystems powered by exclusion and unsafe design that propagate and amplify authoritarianism, violence, and misogyny – ecosystems in which TFGBV is not merely present, but embedded in the business model.
Adding to this, anti-rights actors, bolstered by systematic and substantial funding and leveraging an algorithmically engineered uneven playing field, are able to reach millions, while civil society and public interest media struggle for online visibility and engagement – a struggle that also threatens their survival as media outlets and organisations.
Profiting from online misogyny
In her book “Misogyny on the Internet”, Mariana Valente argues that any discussion of misogyny must engage with business models in the digital environment. She makes clear that choices about platform and algorithm design are driven by visibility – and that visibility is set by market interests, not by the benefit of users, consumers, or individuals. Such systems of ‘visibility’ reward viral, divisive, and sensationalist content, much of it gendered in nature, to keep users engaged with highly personalized feeds, amplifying the vulnerability of women and gender-diverse people as a consequence.
The Center for Countering Digital Hate (CCDH) estimates that YouTube earned £3.4 million (€3.9 million) in ad revenue from the videos of Andrew Tate alone – one of the most prominent figures in the networked manosphere of online misogynistic communities, some of which openly and actively condone violence against women.
This also feeds a larger and growing phenomenon that thrives on platform complacency: the manosphere. The manosphere, as defined by UN Women, is a globally connected network of online communities opposed to feminism and gender equality, functioning as an ecosystem of hostile ideologies that travels across borders. While its reach is global, the playbook is localized through language and motifs to fit regional contexts, framing feminism as a foreign intrusion, destructive to the family unit, and anti-religious. Platforms exploit this content for their own financial gain, completely disregarding its impact on women and gender minorities both online and offline.
In our own research, undertaken in 2025 to examine the impacts of TFGBV on women media-makers in West Asia and North Africa (WANA), the findings spoke to a larger global reality, in which digital media ecosystems align with conservative agendas and the broader rollback of trust and safety policies. Women journalists, content creators, activists, and social workers are routinely subjected to coordinated attacks designed to damage their reputation, silence them, and drive them away from reporting on crucial information, especially related to SRHR, a topic often deemed immoral (‘haram’) or damaging to society’s values.
These coordinated and targeted attacks deploy a recognizable set of techniques: verbal abuse, sexualised harassment, and explicit threats delivered through comments and private messages; fabricated rumours and false moral accusations engineered to destroy credibility; and the deliberate dissemination of unverified personal information to intimidate and expose (doxxing). Content is manipulated, images are doctored, and disinformation is spread through fake accounts and coordinated networks – all with the calculated goal of raising the cost of speaking publicly about topics arbitrarily labelled as immoral.
Subsequent research, undertaken in partnership with a team of Iraqi researchers during the November 2025 elections, found that women were subjected to similarly systemic and coordinated disinformation campaigns targeting not their policies, but their identities, appearances, reputations, and perceived violations of traditional gender roles. These campaigns often leveraged tribal, political, and religious networks, exploiting online platforms such as Facebook, where regulation and accountability remain limited. The attacks were wide-ranging: derogatory comments targeting physical features, moral accusations rooted in religious and cultural norms, and fabricated content designed to dehumanize and discredit. Women politicians interviewed for the research believe that the attacks were partially or fully coordinated by political rivals, intra-party opponents, and organized networks. Mainstream media compounded the damage, amplifying harmful narratives through sensationalist headlines and clickbait content rather than challenging them in line with ethical journalistic standards.
In Tunisia, our qualitative research similarly reveals that online attacks on women media-makers tend to target their appearance, clothing, and character. In particular, women media-makers who discuss issues like gender and sexuality are framed as immoral, and their identities are weaponised to defame and discredit their work and opinions. These attacks are frequently couched in religious language that portrays their advocacy as “foreign propaganda” threatening cultural and religious values. Feminists and SRHR advocates who are active online as content creators and influencers are systematically demonized as extremists and corruptors.
The consequences are far-reaching: women report softening or self-censoring their messaging to avoid backlash, avoiding certain topics entirely, and withdrawing partially or fully from public engagement. This feeds a larger problem of media viability, particularly for women media-makers who, because they address taboo or sensitive topics, are unable to continue doing their work, which in turn reduces their visibility and their ability to monetise their content online. The spillover effects of gender-based disinformation (GBD), and TFGBV more broadly, therefore have layered impacts, and there is an urgent need to document and address them in both the short and long term.
Shadow-banning and online censorship on the basis of gender is another face of TFGBV, but one much harder to quantify and address. In December 2025, the public got a brief insight into it, when more than 50 organisations working globally on sexual and reproductive rights reported being shut down across Meta’s platforms – especially Facebook, Instagram, and WhatsApp. Abortion hotlines were blocked in countries where abortion is legal, even as queer and sex-positive accounts were banned. As reported by The Guardian, Meta was dismissive in addressing these issues, alluding to internal difficulties with its content moderation policies. Meanwhile, a Meta employee was reported to have advised an affected organisation to “move away from the platform entirely and start a mailing list”, saying that bans were likely to continue. Meta has officially denied sending this message.
The collective impact – and collective solutions
Across our different research projects and programmes, a latent fear persists – the sense that silence is the only way to feel safe. Censorship often gets framed as self-censorship. The difficulty with this framing is that it still attributes a degree of choice to the victims, while failing to acknowledge the patriarchal norms, biases, and structures encoded in our digital media ecosystems, which actively enable and incentivise harmful speech and actions such as TFGBV.
To dismantle the structural harms of TFGBV and GBD, we must shift the burden of safety from individual resilience to systemic accountability. This starts by centring women and gender-diverse media makers, and their experiences, in the design and implementation of solutions. Policy driven by their lived experience is far more effective as prevention than reactive measures are.
Building safe spaces as a strategic effort, so that those affected can vocalize their experiences, directly counters the chilling effect that leads to self-censorship. It builds a collective base of accessible evidence that challenges dominant, patriarchal narratives. Ultimately, it challenges the profit-driven and exclusionary aspects of digital ecosystems by providing alternatives that are equitable and transparent.
In her book Living a Feminist Life, Sara Ahmed writes: “Hope is not at the expense of struggle but animates struggle; hope gives us a sense that there is a point to working things out, working things through.” For far too long, women and gender minorities have had to absorb the tensions and animosities of online spaces, and have had to construct, and constantly reconstruct, strategies to mitigate the impacts on their mental health, professional paths, and standing in society. In a healthy digital media ecosystem, this should not be their work to do. There is a pressing need to come together, pool resources, and work to create a digital ecosystem that women and gender minorities do not have to pay for with their safety, silence, or mental health.