A new report by Gatefield has warned that up to 70 million women and girls in Nigeria could be exposed annually to artificial intelligence–facilitated online abuse by 2030, if urgent legal and regulatory safeguards are not introduced.
The report, titled “Industrialised Harm: The Scale of AI-Facilitated Violence in Nigeria”, revealed that about 30 to 35 million women and girls could be directly targeted each year through deepfakes, impersonation, and coordinated harassment campaigns if urgent regulatory action is not taken.
Gatefield said it based its projections on Nigeria’s rapid internet growth, observed patterns of online harm, and the scaling effect of generative AI technologies.
The estimates, the organisation noted, draw on its State of Online Harms 2025 report, publicly available data, and predictive modelling grounded in conservative assumptions.
Nigeria’s expanding internet population
Analysing the findings, PREMIUM TIMES found that Nigeria’s internet population is projected to reach nearly 200 million users by 2030, with women accounting for almost half of that figure.
Gatefield’s 2025 data shows that nearly half of Nigerian internet users experience online harm, with women representing 58 per cent of victims.
The report distinguishes between general exposure to harmful content and direct targeting, defined as deliberate AI-facilitated abuse such as non-consensual sexual imagery, impersonation, and coordinated disinformation campaigns.
Further analysis of the findings shows that while 50–60 per cent of women and girls online may be exposed to harmful content annually, 25–30 per cent could face direct AI-enabled targeting, even under conservative modelling.
Psychosocial toll of AI-enabled abuse
Beyond reputational and economic damage, the report highlights severe psychosocial consequences for women targeted online.
It references research on Nigerian women affected by non-consensual image sharing, which found that nearly 90 per cent experienced depression and suicidal thoughts, with several respondents considering or attempting suicide.
The report notes that generative AI intensifies these harms by increasing the speed, realism, and reach of abusive content, often overwhelming victims’ capacity to respond.
Illustrative cases of AI-facilitated gender violence
The report highlights recent Nigerian cases to illustrate how AI-facilitated abuse operates in practice and how enforcement gaps persist.
It cited the case of Ayra Starr, whose image was used to generate a nude deepfake via X’s AI chatbot, Grok. Although users reported the content, it spread across multiple platforms before the account involved was suspended, and no criminal investigation was initiated.
The report also referenced Natasha Akpoti-Uduaghan, the senator representing Kogi Central, who was targeted with deepfake audio recordings and a fabricated video during her public allegations of sexual harassment against the Senate President, Godswill Akpabio.
Gatefield said the timing and targeted nature of the content pointed to the strategic use of AI to undermine women’s credibility.
Another case involved Kehinde Bankole, the Nollywood actor, who was targeted with AI-generated nude deepfakes in 2025.
Gatefield noted that her experience reflects a broader pattern of abuse enabled by weak moderation and legal gaps, with generative AI tools used to exploit publicly available images.
Legal gaps, institutional oversight
Gatefield found that Nigeria lacks a coordinated legal and institutional framework to address AI-facilitated abuse.
The report said the country has no AI-specific governance framework, no clear legal definition of deepfakes or synthetic media, and no explicit recognition of AI-enabled gender-based violence as a distinct harm.
Existing cybercrime and criminal laws, it noted, were designed for individual actors and intent, not automated systems that amplify abuse across platforms and borders.
Institutional oversight was also found to be fragmented.
Agencies such as the National Information Technology Development Agency (NITDA), the Nigerian Communications Commission (NCC), law enforcement agencies and the Ministry of Justice operate without clear mandates or coordination on AI-related harms, leaving victims with limited and often ineffective avenues for redress.
Global responses and lessons for Nigeria
The report compared Nigeria’s regulatory environment with jurisdictions that have adopted stronger safeguards.
In the European Union, the AI Act and General Data Protection Regulation (GDPR) impose mandatory transparency and financial penalties.
Also, France criminalises non-consensual sexual deepfakes, with fines and prison terms.
The United States’ TAKE IT DOWN Act mandates rapid takedowns, while the United Kingdom’s Online Safety Act establishes a platform duty of care.
Gatefield said these examples demonstrate global recognition of AI-facilitated harm as an urgent regulatory issue, while Nigeria currently lacks dedicated legislation, forensic capacity and enforcement infrastructure.
Policy and regulatory implications
Speaking to PREMIUM TIMES on the findings, Adewunmi Emoruwa, chief executive officer of Gatefield, said artificial intelligence has fundamentally altered the nature of online harm in Nigeria.
Mr Emoruwa said AI tools have lowered the cost of abuse, accelerated its spread, and made it more psychologically damaging, particularly for women in public life.
“Tools like deepfake generators allow ordinary users to produce convincing, intimate, and reputationally damaging content at scale, often targeting women in public life,” he said.
Mr Emoruwa noted that Nigeria’s legal framework remains ill-equipped to deal with automated systems that enable abuse at scale, leaving victims to navigate slow and opaque reporting processes.
He stressed the need for binding platform obligations, clearer legal definitions, and time-bound redress mechanisms to shift responsibility away from victims and towards the systems that enable harm.
Preparedness and shared responsibility
Shirley Ewang, advocacy lead at Gatefield, also told PREMIUM TIMES that artificial intelligence is intensifying online harm in Nigeria by making abuse cheaper, faster, and harder to trace.
Ms Ewang said deepfakes, disinformation, and coordinated harassment campaigns are increasingly deployed against women in politics, media, and activism, at a time when Nigeria’s digital population continues to grow.
She said Nigeria remains largely unprepared to respond, citing the absence of AI-specific legal frameworks, weak platform accountability, and slow redress mechanisms.
She added that addressing AI-enabled harm requires coordinated action by government, technology platforms, and civil society, including enforceable regulation, survivor-centred responses, and large-scale digital literacy initiatives.
“Together, these actors can shift Nigeria from a reactive posture to a preventive, rights-based approach to AI governance,” she said.
Policy recommendations
Based on its findings, Gatefield urged Nigerian authorities to introduce mandatory guidelines on AI-facilitated violence, including clear provisions addressing non-consensual sexual imagery, impersonation, and AI-driven disinformation.
The report also called for clear legal definitions of deepfakes and synthetic media, alongside binding obligations for platforms operating in Nigeria to label AI-generated content, conduct periodic algorithmic risk assessments, and publish regular transparency reports detailing takedown actions and moderation outcomes.
In addition, Gatefield recommended accessible reporting and redress mechanisms, enforceable 24–48-hour takedown timelines for harmful AI-generated content, and specific safeguards for women, children, and other vulnerable groups.
The report noted that distinguishing between general exposure to harmful content and direct, targeted abuse is critical to designing effective responses, ranging from content moderation and digital literacy to enforceable legal frameworks, forensic investigation, and victim redress.