Deepfakes are everywhere in the news these days. These AI-generated videos and images can make anyone look like they’re saying or doing things they never did. While people often worry about deepfakes being used for fake political speeches or spreading lies, the data points to a different and much larger problem.

The Explosive Growth of Deepfakes

Recent reports reveal a massive rise in deepfakes. The number of deepfake files shared online jumped from around 500,000 in 2023 to a projected 8 million by the end of 2025. The vast majority of this content (96–98%) is non-consensual intimate or pornographic material, and 99–100% of the victims are women.

Most Victims Are Women

A new academic paper by researcher Dana Mahr from the Karlsruhe Institute of Technology takes a close look at this trend. Published in the journal AI & Society, the study explains that sexualized deepfakes are not just about misinformation. They are a modern form of image-based sexual abuse that builds on long-standing patterns of gender-based control and harm.

The Real Damage: Visual Coercion

The real damage comes from what the researcher calls “visual coercion.” Perpetrators use AI to put a woman’s face and body into explicit sexual situations without her permission. Even when everyone knows the content is fake, the harm remains real: it causes humiliation, takes away control over one’s own image, and can damage reputations and careers.

This is especially dangerous because it’s so easy to do. In the past, creating revenge porn usually required private photos from a personal relationship. Now, anyone can generate convincing fake explicit content using nothing more than public photos from social media. Public figures such as journalists, politicians, and activists are frequent targets, but ordinary people are at risk too.

Why Platforms Make It Worse

Social media platforms make the situation worse. Their systems are built to push content to as many people as possible, so harmful deepfakes spread quickly and widely. Anonymity on these platforms protects the people creating the content, making it hard to hold them accountable. When platforms do ban this material, it often just moves to other sites.

The Gaps in Laws and Technology

Current laws are starting to catch up in some countries by addressing non-consensual deepfake pornography. But there are big gaps. Victims often struggle to identify the creator, especially when perpetrators are anonymous or based in another country. Legal processes can be slow, expensive, and place most of the burden on the person who was harmed.

Technical tools that try to detect deepfakes are improving, but they have limits. Spotting that something is AI-generated doesn’t erase the emotional or reputational damage. Plus, the content can keep reappearing online, causing repeated trauma.

What Needs to Change

The study calls for a smarter, more complete response that puts consent at the center. This means:

  • Holding platforms more responsible for preventing and quickly removing harmful content
  • Updating laws to better protect people’s identity and autonomy, even when the images are AI-generated
  • Creating better technical safeguards, like tools that track the origin of images and verify consent, such as our Veritas and Deepface
  • Encouraging broader cultural changes around respect for consent and personal images
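The idea behind provenance-tracking tools can be illustrated at its simplest with content fingerprinting. This is a minimal sketch, not how Veritas, Deepface, or any specific provenance standard actually works: it just shows why a cryptographic fingerprint recorded at capture time lets anyone detect that an image has since been altered.

```python
import hashlib

def content_fingerprint(data: bytes) -> str:
    """Return a SHA-256 hex digest identifying these exact image bytes.

    Provenance systems record a fingerprint like this when an image is
    created; any later alteration of the file, including an AI edit,
    produces a different fingerprint and breaks the recorded chain.
    """
    return hashlib.sha256(data).hexdigest()

# Placeholder byte strings standing in for real image files.
original = b"original pixel data"
tampered = b"deepfaked pixel data"

# The same bytes always hash to the same fingerprint, so a published
# fingerprint lets anyone confirm an image is untouched...
assert content_fingerprint(original) == content_fingerprint(original)

# ...while any modification, however small, changes the fingerprint.
assert content_fingerprint(original) != content_fingerprint(tampered)
```

Real-world provenance efforts (for example, the C2PA “Content Credentials” standard) build on this basic principle, adding signed metadata about who created an image and what edits were applied.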

Simply telling individuals to be more careful online or to reduce their digital footprint isn’t enough. It shifts the responsibility onto potential victims instead of fixing the systems that enable the abuse.

At vali.now, our cybersecurity work focuses on helping people and organizations stay safe as AI evolves. The surge in deepfakes shows that we need stronger cooperation among lawmakers, tech companies, researchers, and users. As these tools become more powerful and easier to access, closing the gaps in laws, platforms, and AI governance is urgent.

We must treat sexualized deepfakes as the serious form of gendered abuse they are – not just as a technical curiosity or a minor side effect of AI progress.

What are your thoughts? Have deepfakes affected you or someone you know? Share your experiences in the comments below, and feel free to reach out to our team at vali.now for practical advice on protecting your digital identity.
