Reading Time: 16 min | Last Updated: February 25, 2026
Everyone on the Video Call Was Fake. The $25 Million Was Real.
The employee at Arup Engineering in Hong Kong didn't suspect a thing.
He'd been invited to a video conference with senior executives — including the company's Chief Financial Officer. The faces were familiar. The voices matched. The background looked like the same office he'd visited dozens of times. The CFO explained that a confidential acquisition required an urgent wire transfer. The other executives on the call nodded in agreement.
The employee followed instructions. He initiated the transfer. $25 million — gone to an offshore account in minutes.
Every single person on that video call was a deepfake. The CFO. The other executives. All of them. AI-generated faces mapped onto live video feeds, with cloned voices delivering scripted instructions in real time.
By the time the real CFO learned about the transfer, the money was gone.
This isn't science fiction. This happened in 2024. And it's just one story in a tsunami of deepfake incidents that is fundamentally changing what it means to trust what you see and hear.
The Deepfake Explosion — By the Numbers
| Statistic | Data |
|---|---|
| Deepfakes created per month (2026) | 900,000+ |
| Growth since 2023 | +550% |
| Deepfake files online (end of 2025) | 8 million (16x increase from 2023) |
| Deepfake fraud incidents growth | +3,000% since 2023 |
| Americans exposed to deepfakes daily | 2.6 per person |
| Businesses reporting deepfake fraud (2024) | 49% |
| Global financial losses from deepfake fraud (2026) | $4.7 billion |
| Human ability to spot high-quality deepfakes | 24.5% (worse than a coin flip) |
| Companies with formal anti-deepfake protocols | Only 13% |
Sources: WasItAIGenerated, VerifyReal, Programs.com, SQ Magazine
Read that detection rate again. Humans correctly identify high-quality deepfakes only 24.5% of the time. You'd do better flipping a coin. That's not a security gap — that's a chasm.
The Three Battlegrounds: How Deepfakes Are Reshaping Reality
Battleground 1: Democracy and Elections
This is where deepfakes do the most long-term damage — not by stealing money, but by stealing trust.
32% of all deepfake incidents target politics and elections — the largest single category.
Real incidents:
- US, 2024: A deepfake robocall impersonating President Biden urged New Hampshire voters not to vote in the primary. Thousands received the call. It sounded exactly like the President.
- Canada, 2025: Deepfake videos of Liberal leader Mark Carney circulated on social media, promoting a fake cryptocurrency scam designed to look like a policy announcement.
- Romania, Czech Republic, Turkey, Argentina: Deepfake videos of candidates and leaders used for character assassination, fake endorsements, and manufactured scandals — all in election seasons.
- 38 countries documented political deepfakes between 2023 and 2025, targeting heads of state, candidates, and journalists.
But here's the truly insidious part — something researchers call the "Liar's Dividend."
As people become aware that deepfakes exist, a new weapon emerges: politicians can dismiss real footage as fake. Caught on camera saying something controversial? "That's a deepfake." Genuine evidence of corruption surfaces? "AI-generated disinformation." The existence of deepfakes doesn't just create fake truth — it lets people deny real truth.
That's not a technology problem. It's a democracy problem.
Battleground 2: Business and Financial Fraud
The Arup heist was the headline. But it's the tip of the iceberg.
| Attack Type | How It Works | Typical Impact |
|---|---|---|
| CEO video fraud | Deepfake video call impersonating executives to authorise transfers | $500K – $25M+ |
| Voice cloning BEC | AI-cloned voice of CEO/CFO calls finance team with urgent wire request | $100K – $680K |
| Deepfake job candidates | Fake candidates use deepfake video and AI voices to pass remote interviews | Insider access, data theft |
| Contact centre fraud | Cloned voices bypass voice-based biometric authentication at banks | $44.5B industry-wide (2025) |
53% of finance professionals report being directly targeted by deepfake voice fraud. And Experian now warns that deepfake job candidates — people who don't actually exist, interviewing via AI-generated video — are a top emerging fraud risk for 2026.
Think about that for a second. A company could hire someone who literally doesn't exist. That "employee" gets access to internal systems, customer data, source code — and they're actually a threat actor operating from the other side of the world.
Battleground 3: Personal Harm and Everyday Deception
Deepfakes aren't just a corporate or political problem. They're destroying lives at the individual level:
- Non-consensual deepfake imagery: AI-generated explicit content using someone's face — overwhelmingly targeting women. This has exploded in scale and is now one of the most common uses of deepfake technology.
- Virtual kidnapping scams: The FBI has warned about calls where an AI-cloned voice of a family member begs for help — "Mom, I've been kidnapped, please send money" — using voice samples from social media videos.
- Romance and investment scams: Deepfake video calls make online scammers far more convincing. Victims believe they're in a video relationship with a real person.
Americans encounter an average of 2.6 deepfakes per day. Most of the time, they don't even know it.
Can We Detect Deepfakes? (The Arms Race)
| Detection Method | Accuracy | Limitation |
|---|---|---|
| AI detection (video) | 96% | Drops to ~78% in real-time; 65% with low-quality source |
| AI detection (images) | 98% | Still improving; new generation models evade older detectors |
| AI detection (audio) | 92% | Drops to 58% for audio-only deepfakes |
| Human detection | 24.5% | Worse than a coin flip for high-quality deepfakes |
The technology to create deepfakes and the technology to detect them are locked in a constant arms race. Every time detection improves, creation tools evolve to circumvent it. And right now, creation is winning.
"Deepfake-as-a-Service" — affordable, user-friendly tools that let anyone with a laptop create convincing deepfakes — is now a reality. You no longer need technical expertise. You need $20 and ten minutes.
How to Protect Yourself, Your Business, and Your Sanity
For Individuals:
- Adopt the "Verify Through a Separate Channel" habit. If someone calls asking for money — even if they sound exactly like your family member — hang up and call them back on their known number. Same for any unusual request from a "boss" or "colleague." This is the 5-Second Pause applied to deepfakes.
- Establish a family safe word. Pick a code word that only your family knows. If someone calls claiming to be in danger, ask for the word. No AI can guess it.
- Be mindful of your digital footprint. Every video you post, every audio clip you share — it's training material for someone who might want to clone you. You don't have to live in fear, but be aware of what you're feeding the machine.
- Use multi-factor authentication everywhere. Deepfakes fool humans. They don't fool hardware security keys. Cryptographic verification doesn't care how real a face looks.
- Question viral content. Before you share that shocking video of a politician — check the source. Check fact-checking sites. Check if any reputable news outlet has verified it. The goal of political deepfakes is to be shared before they're verified.
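The point about hardware keys and cryptographic verification can be made concrete. The sketch below (a toy challenge-response using Python's standard `hmac` library, not any specific MFA product) shows why a deepfake fails against cryptography: an impostor can clone a face and a voice, but without the shared secret they cannot compute a valid response. The names and secrets here are purely illustrative.

```python
import hmac
import hashlib
import secrets

# Hypothetical shared secret: in a real MFA setup this lives inside a
# hardware key or authenticator app, never in a voice or video call.
SHARED_SECRET = secrets.token_bytes(32)

def issue_challenge() -> bytes:
    """The verifier sends a fresh random challenge for each login."""
    return secrets.token_bytes(16)

def respond(challenge: bytes, secret: bytes) -> str:
    """Only a holder of the secret can compute this response."""
    return hmac.new(secret, challenge, hashlib.sha256).hexdigest()

def verify(challenge: bytes, response: str, secret: bytes) -> bool:
    """Constant-time comparison of the expected and received responses."""
    expected = hmac.new(secret, challenge, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

challenge = issue_challenge()

# The legitimate party holds the secret: verification passes.
assert verify(challenge, respond(challenge, SHARED_SECRET), SHARED_SECRET)

# A deepfake caller can mimic appearance, but not the secret: it fails.
impostor_secret = secrets.token_bytes(32)
assert not verify(challenge, respond(challenge, impostor_secret), SHARED_SECRET)
```

Real systems like FIDO2/WebAuthn use public-key signatures rather than a shared secret, but the principle is the same: the check depends on possessing a key, not on how convincing the person on screen looks.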
For Businesses:
- Implement verification protocols for financial transactions. No wire transfer above a threshold should be authorised based solely on a video or phone call. Require multi-person approval through separate, authenticated channels.
- Train employees specifically on deepfake threats. Your social engineering training needs to include deepfake scenarios. Show employees what AI-generated video and audio look like. Run simulations.
- Deploy deepfake detection tools. AI-powered detection solutions for video conferences and communications are now commercially available. They're not perfect, but they catch many fakes that humans miss.
- Verify job candidates in-person. For remote hiring, require at least one stage that includes verified identity — government ID check, in-person meeting, or notarised verification.
- Establish anti-deepfake policies. Only 13% of companies have formal protocols. Be in that 13%. Define how your organisation verifies identity in high-stakes communications.
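The multi-person, multi-channel rule above can be expressed as a simple policy check. This is an illustrative sketch, not a real payments system: the threshold, role names, and channel names are all hypothetical. The key property is that a single video call, however convincing, can never satisfy the check on its own.

```python
from dataclasses import dataclass

APPROVAL_THRESHOLD = 50_000   # hypothetical: transfers above this need extra scrutiny
REQUIRED_APPROVERS = 2        # distinct people must sign off
# Approvals must arrive via separate, authenticated channels:
REQUIRED_CHANNELS = {"callback_verified_number", "signed_email"}

@dataclass(frozen=True)
class Approval:
    approver: str   # identity confirmed out-of-band, not via the requesting call
    channel: str    # how the approval was received

def transfer_authorised(amount: float, approvals: list[Approval]) -> bool:
    """True only if the policy's people and channel requirements are met."""
    if amount <= APPROVAL_THRESHOLD:
        return len(approvals) >= 1
    people = {a.approver for a in approvals}
    channels = {a.channel for a in approvals}
    return len(people) >= REQUIRED_APPROVERS and REQUIRED_CHANNELS <= channels

# A deepfake video call is one channel and one apparent person: rejected.
assert not transfer_authorised(25_000_000, [Approval("cfo", "video_call")])

# Two real approvers, each via an authenticated separate channel: allowed.
assert transfer_authorised(
    25_000_000,
    [
        Approval("cfo", "callback_verified_number"),
        Approval("controller", "signed_email"),
    ],
)
```

A policy like this is exactly what would have stopped the Arup transfer: the deepfake call supplies at most one channel, so the check fails regardless of how real the faces looked.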
The Bottom Line
I want to leave you with a thought that keeps me up at night — and then one that helps me sleep.
The scary thought: we are entering an era where seeing is no longer believing. Video evidence, voice recordings, live video calls — the things humans have relied on for centuries to establish truth — can now be fabricated from scratch in minutes. That changes everything. It changes how we trust information, how we verify identity, how we conduct business, and how we run democracies.
The hopeful thought: every time in history that a new deception tool has emerged, society has eventually developed antibodies. Photography was once considered impossible to fake — then Photoshop arrived, and we learned to question photos. Email was once trusted implicitly — then spam arrived, and we built filters. We'll develop antibodies to deepfakes too. Better detection. Better verification protocols. Better media literacy. Better laws.
But we're in the messy middle right now. The creation tools are ahead of the detection tools. The scammers are ahead of the awareness. The technology is ahead of the regulation.
So until the world catches up, your best defence is the simplest one: when something feels urgent, important, and unexpected — pause. Verify. Use a separate channel.
That Arup employee? He'd seen the CFO's face. He'd heard the CFO's voice. Everything looked right. The only thing that could have saved him was a process that said: "Before transferring $25 million, call the CFO directly on a verified number."
Trust, but verify. In the deepfake era, that's not paranoia. That's survival.
Continue the series: antivirus, Zero Trust, 10 mistakes, ransomware, VPN vs Zero Trust, social engineering, password managers, supply chain, MFA, WiFi security, encryption, dark web, privacy, AI cybersecurity, quantum, firewalls, cloud security, small business, IoT security, phishing, cyber insurance, and biometrics.
— Harsh Solanki, Founder of FutureInsights.io
Frequently Asked Questions
Can I make a deepfake of someone? Is it legal?
The technology is freely available — many deepfake tools cost as little as $20 or are free. Legality varies dramatically by jurisdiction and intent. Creating non-consensual explicit deepfakes is illegal in many countries and US states. Using deepfakes for fraud is criminal everywhere. Political deepfakes occupy a murky legal grey zone — some jurisdictions are passing specific laws, but enforcement is inconsistent. Just because you can doesn't mean you should. The ethical and legal risks are significant and growing.
How can I tell if a video is a deepfake?
Humans only spot high-quality deepfakes 24.5% of the time, so visual detection is unreliable. That said, look for: unnatural eye movements or blinking, inconsistent lighting between face and background, slight blurring around the jawline or hairline, audio that's slightly out of sync with lip movements, and unusual skin texture. For higher confidence, use AI-powered detection tools or reverse-image search the source. But the most reliable protection isn't detection — it's verification through a separate channel.
What is the "Liar's Dividend"?
The Liar's Dividend is the phenomenon where the existence of deepfakes allows people to dismiss genuine evidence as fake. A politician caught on camera making a controversial statement can simply say "that's a deepfake." This is arguably more dangerous than deepfakes themselves because it undermines the concept of evidence entirely. When anything can be called fake, nothing can be proven real. This erosion of shared truth is one of the deepest threats deepfakes pose to democracy.
What is Deepfake-as-a-Service?
Deepfake-as-a-Service (DFaaS) refers to commercial tools and platforms that make creating deepfakes easy and affordable for anyone. These services offer AI face-swapping, voice cloning, and video manipulation for as little as $20 — no technical expertise required. Some are marketed for entertainment, but they're widely used for fraud, disinformation, and harassment. Forbes identified DFaaS as a top emerging cybersecurity threat for 2026, alongside agentic AI attacks.
Can deepfakes affect my credit or identity?
Yes. Deepfakes are increasingly used in identity fraud — creating fake video verifications to open bank accounts, bypass KYC (Know Your Customer) checks, or apply for loans in someone else's name. Combined with stolen personal data from dark web databases, a deepfake of your face paired with your personal details could be used to impersonate you for financial fraud. Using strong identity verification (passkeys, hardware keys) and monitoring your credit report are essential defences.
What should I do if someone creates a deepfake of me?
Document everything — save the content, note where you found it, capture URLs and screenshots. Report it to the platform hosting the content (most major platforms have deepfake/AI content reporting policies). File a report with law enforcement — especially if the content is explicit, defamatory, or used for fraud. Contact a lawyer if necessary — laws around deepfakes are evolving rapidly, and many jurisdictions now provide legal remedies. If it's being used for financial fraud, alert your bank and freeze your credit immediately.
📚 Further Reading & Research
- Deepfake Statistics 2026 — WasItAIGenerated
- Deepfake Statistics 2026 — VerifyReal
- Deepfake Statistics & Trends — Keepnet Labs
- Deepfakes-as-a-Service 2026 — Forbes
- Deepfake Fraud Case Studies 2025 — GAFA
- Deepfake Propaganda & Election Integrity — AI Competence
- Political Deepfakes Report — Recorded Future
- Deepfake Statistics 2026 — SQ Magazine
- Deepfake Facts & Statistics 2026 — Programs.com