In an era where artificial intelligence is reshaping how we consume information, deepfakes – AI-generated videos, audio, or images that mimic real people with eerie accuracy – pose one of the most insidious risks to journalism. These synthetic media can fabricate statements, actions, or events that never occurred, undermining reporters’ credibility and distorting public discourse.
Drawing from a comprehensive analysis by Reporters Without Borders (RSF), along with insights from similar studies, this post explores the mounting challenges and damages deepfakes inflict on the field, particularly on journalists, and highlights real-world examples that illustrate the urgency of the issue.
The Scope of the Threat: Insights from RSF and Beyond
According to RSF’s examination of 100 deepfake incidents targeting journalists across 27 countries between December 2023 and December 2025, the problem is escalating rapidly. A staggering 74% of the victims were women, with 13% of those cases involving pornographic deepfakes – a form of gender-based digital violence that amplifies harassment and cyberbullying. These fabrications not only defame individuals but also distort public opinion, spread disinformation, and threaten physical safety. For instance, deepfakes have been used to fabricate anti-vaccination claims or political conspiracies, leading to real-world fallout such as scams and audience manipulation.
This aligns with findings from other sources. A Pindrop analysis emphasizes how deepfakes erode media credibility, accelerate the spread of misinformation, and diminish public trust in journalism. Similarly, a Brookings Institution piece warns of an “uncertain future of truth,” where deepfakes turn fiction into apparent fact, potentially influencing elections and national security. The Nieman Lab has urged newsrooms to prepare by focusing on detection and mitigation strategies, noting that while the threat is real, it’s often overstated and can be countered through targeted interventions. A UNESCO report frames deepfakes as a “crisis of knowing,” projecting massive growth in AI-driven fraud and societal disruption, with fraud losses potentially reaching $40 billion by 2027. These reports collectively underscore that deepfakes aren’t just a technological novelty; they’re a tool for fraud, defamation, and broader societal harm.
The Damages: From Personal Harm to Societal Erosion
Deepfakes inflict multifaceted damage on journalists and the public. On a personal level, they expose reporters to harassment, especially women, who face disproportionate targeting through sexualized content. This can lead to smear campaigns, doxxing, and even physical threats, forcing some journalists to alter their work habits or step back from public-facing roles. For example, victims report receiving floods of scam-related complaints or enduring stalled investigations into the perpetrators.
Broader damages include the erosion of trust in the media. When audiences can’t distinguish real reporting from fabrications, it fosters skepticism toward legitimate journalism, amplifying disinformation during critical events like elections or conflicts. In political contexts, deepfakes have been linked to election interference, as seen in Slovakia and Nigeria, where AI-generated audio falsely depicted candidates plotting to rig votes or hike beer prices. Economically, they enable scams, including a $25.6 million fraud case involving an AI-generated video call. The “liar’s dividend” – where real content is dismissed as fake – further complicates accountability, as evidenced in Turkey’s elections, where a genuine compromising video was labeled a deepfake.
In journalism specifically, deepfakes blur the line between fact and fiction, making fact-checking harder and potentially leading to stock market dips or public panic, as with a fake Pentagon explosion image. As AI tools become more accessible, the volume of such content surges, with over 500,000 video and voice deepfakes shared online in 2023 alone.
Real-World Examples of Deepfaked Journalists
The RSF report provides stark illustrations, but additional cases from global media highlight the pervasiveness. Here are some notable instances of well-known journalists being deepfaked:
• Cristina Caicedo Smit (Voice of America): In February 2025, her image and voice were replicated in videos attacking Donald Trump and Elon Musk while defending USAID, portraying VOA as politically biased.
• Pedro Benevides (TV1, Portugal): A Facebook deepfake used his face and voice to spread anti-vaccination conspiracy claims about government-pharma collusion, deceiving viewers despite later debunking.
• Leanne Manas (South African broadcaster): Targeted in multiple scam ads for pharmaceuticals and cryptocurrencies, leading to up to 50 daily complaints and police visits to her workplace.
• Rana Ayyub (Indian journalist): Subjected to a pornographic deepfake smear campaign after advocating for justice in a child rape case, combined with doxxing that led to harassment.
• Julia Mengolini (Futurock, Argentina): Victimized by a violent pornographic deepfake depicting an incestuous scenario, amplified by political figures like President Javier Milei.
• Anderson Cooper (CNN): A deepfake video had him disparaging Donald Trump in crude terms, shared by Trump himself on social media.
• Gayle King (CBS Mornings): Appeared in an AI-generated clip promoting a product she never endorsed, circulating widely online.
• Clarissa Ward (CNN): Fake audio overlaid on real footage of her reporting from the Israel-Gaza border, undermining her coverage.
• Richard Engel (NBC) and Yalda Hakim: Both deepfaked in content suggesting geopolitical conflicts, like a fabricated Pakistan-India war, aimed at sowing doubt or confusion.
• FRANCE 24 Journalist: Impersonated in a deepfake claiming Emmanuel Macron sent troops to Ukraine, spread by pro-Russian outlets.
These examples reveal a pattern: deepfakes often target high-profile figures to maximize reach and impact, exploiting their credibility for malicious ends.
Combating Deepfakes: Tools and Strategies
Addressing this threat requires a multi-pronged approach. RSF recommends adopting traceability standards in newsrooms, certifying content on platforms, and creating specific laws against malicious deepfakes. News organizations like The Wall Street Journal and Reuters are training staff in detection, using algorithms to spot inconsistencies in audio, video, or imagery. And there are useful, freely available resources to help journalists spot fake images, such as Henk van Ess’s Image Whisperer tool.
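To make the detection idea above concrete, here is a minimal, illustrative sketch of one well-known family of techniques: frequency-domain analysis. Generators that upsample images often leave periodic high-frequency artifacts that a natural photograph lacks, and several detectors look for excess energy in the azimuthally averaged power spectrum. This is not the method used by any specific newsroom or by vali.now’s products; the function names (`radial_power_spectrum`, `high_freq_ratio`) and the synthetic demo data are assumptions for illustration only, and real detectors combine many signals.

```python
import numpy as np

def radial_power_spectrum(image: np.ndarray, bins: int = 64) -> np.ndarray:
    """Azimuthally averaged (ring-averaged) log power spectrum of a
    grayscale image, from low frequencies (bin 0) to high (last bin).
    Upsampling artifacts from generative models tend to show up as
    excess energy in the high-frequency bins."""
    # Centered 2-D Fourier power spectrum, log-scaled for stability.
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    power = np.log1p(np.abs(spectrum) ** 2)

    # Distance of each coefficient from the spectrum center.
    h, w = image.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h // 2, xx - w // 2)

    # Average the log power within concentric rings.
    ring = np.minimum((r / r.max() * bins).astype(int), bins - 1)
    totals = np.bincount(ring.ravel(), weights=power.ravel(), minlength=bins)
    counts = np.bincount(ring.ravel(), minlength=bins)
    return totals / np.maximum(counts, 1)

def high_freq_ratio(image: np.ndarray) -> float:
    """Share of ring-averaged spectral energy in the top quarter of
    frequencies -- a crude single-number artifact score."""
    profile = radial_power_spectrum(image)
    return float(profile[-len(profile) // 4:].sum() / profile.sum())

# Demo on synthetic data: a smooth gradient standing in for natural
# content, versus the same image with a fine checkerboard pattern
# mimicking the periodic artifacts some generators leave behind.
smooth = np.outer(np.linspace(0.0, 1.0, 256), np.linspace(0.0, 1.0, 256))
i, j = np.indices(smooth.shape)
artifact = smooth + 0.05 * ((-1.0) ** (i + j))  # hypothetical artifact pattern

print(f"smooth image score:     {high_freq_ratio(smooth):.3f}")
print(f"artifact-laden score:   {high_freq_ratio(artifact):.3f}")
```

A real pipeline would threshold or classify such scores across many images rather than trusting a single number, and would pair spectral cues with metadata, provenance signatures, and model-based detectors.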
At vali.now, we’re at the forefront of this fight with our image integrity products. Our Live Video Deepfake Detection tool helps identify manipulations in real time, while Ariane offers forensic analysis for law enforcement, and Veritas ensures integrity in scientific and journalistic contexts. If you’ve encountered suspicious media, forward it to us for a rapid verdict: Safe, Suspicious, or Confirmed Scam.
Conclusion
Deepfakes represent a profound challenge to journalism, inflicting damage that ranges from personal trauma to democratic erosion. As evidenced by RSF’s analysis and parallel reports, the threat is growing, but so are the solutions. By staying vigilant, adopting advanced detection tools, and supporting initiatives like those at vali.now, we can safeguard the truth. Subscribe to our newsletter for more insights on combating digital deception – and remember, in the age of AI, verification is key.
