How AI-Generated Deepfakes Reflect a Growing GBV Crisis Online

  • Writer: imaarafoundation
  • Mar 4
  • 3 min read

"Hello there. This article aims to help you understand how AI-generated deepfakes and “nudification” tools, are contributing to a growing crisis of technology-facilitated gender-based violence (GBV). If your image has been altered or shared without your consent, the shock, fear, anger, or humiliation you may be feeling is valid; digital abuse is real harm. If you are supporting a survivor, feeling overwhelmed or unsure is also completely normal. As we unpack this issue, we center consent, dignity, and survivor rights, because innovation should never come at the cost of safety. If you need support or guidance, organizations like Imaara Foundation are here to help."


Written by: Zaid Nayeemi


An incident on X (formerly Twitter) in late December 2025 once again highlighted the urgent need to protect individuals from technology-facilitated abuse.


Users of the platform misused its artificial intelligence tool, Grok, to generate sexually explicit and “nudified” images from photographs shared online. Many of the images were created and circulated without the knowledge or consent of the people depicted. Survivors reported feelings of humiliation, fear, violation, and helplessness as altered images of them spread rapidly across digital spaces.


While the platform has since restricted certain image-generation features following public backlash, the harm caused by such content does not disappear when a feature is disabled. For survivors, the emotional and psychological impact can endure long after the posts are removed.


Beyond One Incident: Technology and Gender-Based Violence


This episode is not an isolated controversy. It reflects a broader pattern of gender-based violence (GBV) increasingly facilitated by digital tools. Non-consensual intimate imagery, deepfake “nudification,” online harassment, and doxxing are forms of abuse that can affect people of all genders, ages, and backgrounds.


When AI tools are misused to alter someone’s image without consent, the act can mirror offline violations of bodily autonomy. The digital nature of the abuse does not make it less real. Survivors often describe:


  • Loss of control over their own image and identity

  • Anxiety about professional and personal repercussions

  • Social stigma and reputational harm

  • Fear of continued targeting


The scale at which generative AI can produce and distribute such content makes the impact particularly severe. Researchers monitoring deepfake activity have documented surges in the production of “nudified” images during short time spans, raising concerns about how quickly such abuse can escalate before safeguards are enforced.


The Responsibility of Platforms and Institutions


Authorities in several countries have raised concerns about whether sufficient safeguards were in place to prevent misuse. In India, the Ministry of Electronics and Information Technology formally communicated with X regarding compliance with national laws after non-consensual content surfaced. The platform later stated it would strengthen enforcement and remove accounts and content found to violate regulations.


However, legal compliance alone is not enough. Survivor-centric responses require:


  • Proactive design safeguards that prevent misuse before harm occurs

  • Rapid reporting and takedown mechanisms

  • Trauma-informed support pathways for affected individuals

  • Clear accountability structures for repeat offenders

Technology companies, policymakers, civil society, and users all share responsibility for ensuring that innovation does not come at the cost of human dignity.


A Pattern That Predates AI


Even before the rise of generative AI tools, online spaces saw repeated instances of image-based abuse. In 2021 and 2022, platforms were misused to target individuals by repurposing publicly available photos in degrading ways. Although legal action followed, survivors have continued to report similar violations across platforms.


The continuity of these incidents shows that while tools may evolve, the underlying issue remains: the normalization of exploiting someone’s image without consent.


Centering Survivors


Discussions about AI misuse often focus on technological capability, regulatory gaps, or corporate accountability. These are important conversations. But at the center must be the people whose images, identities, and sense of safety are affected.


A survivor-centric approach shifts the narrative:

  • From spectacle to impact

  • From blame to accountability

  • From outrage to sustained prevention


Gender-based violence, whether facilitated offline or online, is about power, consent, and dignity. AI-generated deepfakes are not merely a technological glitch; they are part of a broader ecosystem in which digital tools can be weaponized against individuals.


As societies grapple with rapid technological advancement, the fundamental principle must remain clear: consent, autonomy, and safety are not optional features. They are rights.
