Deepfake technology is rapidly becoming a common tool in sophisticated social engineering attacks targeting organizations.

These incidents often involve AI-generated audio and video to impersonate executives, exploiting human trust in visual and auditory cues during communications like video calls.

Arup Incident (January 2024)

A finance employee in the Hong Kong office of the British engineering firm Arup received phishing messages prompting a video conference for a confidential transaction. During the call, deepfakes impersonated the chief financial officer and other staff members. Convinced of the legitimacy, the employee authorized 15 transfers totaling approximately $25 million (HK$200 million) to five bank accounts in Hong Kong controlled by the fraudsters. No internal systems were compromised; the attack succeeded solely through deception.

Early Voice Cloning Incident (2019)

The CEO of a UK-based energy firm received phone calls featuring a voice cloned to mimic the German parent company’s chief executive. The impersonator discussed urgent supplier payments and instructed immediate transfers. This resulted in €220,000 (approximately $243,000) being sent to a fraudulent Hungarian account, marking one of the first documented uses of AI voice synthesis in large-scale corporate fraud.

Escalating Risks and Statistics

These cases underscore the escalating risks. Documented financial losses from deepfake-enabled fraud exceeded $200 million in the first quarter of 2025 alone. In 2024, businesses faced average per-incident losses of nearly $500,000, with larger enterprises reporting losses of up to $680,000.

We emphasize that such attacks occur far more frequently than is publicly disclosed; many victims never report out of reputational concern.

Recommended Defenses

Key defenses include:

  • Requiring multi-person approval for significant transactions (“MultiSig”)
  • Using separate channels for verification of urgent requests (“Out-of-Band Verification”)
  • Training staff to scrutinize unexpected demands, regardless of apparent source authenticity
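
To make the first two defenses concrete, here is a minimal sketch of a payment-release policy gate combining multi-person approval with out-of-band verification. All names, thresholds, and the `PaymentRequest` structure are illustrative assumptions, not a reference to any particular payment system:

```python
from dataclasses import dataclass, field

APPROVAL_THRESHOLD = 10_000   # hypothetical policy limit (in your base currency)
REQUIRED_APPROVERS = 2        # "MultiSig": two distinct people must sign off

@dataclass
class PaymentRequest:
    amount: float
    beneficiary: str
    approvals: set = field(default_factory=set)
    oob_verified: bool = False  # confirmed via a separate channel, e.g. a call
                                # back to a known number, never the one supplied
                                # in the request itself

def approve(req: PaymentRequest, approver: str) -> None:
    # A set deduplicates: one person approving twice still counts once.
    req.approvals.add(approver)

def may_execute(req: PaymentRequest) -> bool:
    """Policy gate: routine payments pass; large ones need N distinct
    approvers AND out-of-band verification before funds can move."""
    if req.amount < APPROVAL_THRESHOLD:
        return True
    return len(req.approvals) >= REQUIRED_APPROVERS and req.oob_verified
```

Note that in the Arup case, a gate like this would have blocked the transfers at two points: the single deceived employee could not have satisfied the approver count alone, and a call back to the real CFO's known number would have failed the out-of-band check.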

Maintaining oversight of unusual financial activity and prepared response protocols is critical. We continue monitoring these developments closely.

If in Doubt: vali.now

Your best defense is healthy skepticism. If something seems just a little off, or a bit too good to be true, it probably is. Your best option is to forward the details to help@vali.now. Our cybersecurity professionals have been recognizing and fighting off such attacks for decades. Your first case (up to one hour of research) on our side is free, with affordable rates after that. You've got nothing to lose, and you might just prevent catastrophic losses by reaching out.
