Thought Leadership · 6 min read

$15 and Half an Hour: What the Deepfake Fraud Economy Means for Your Onboarding Stack

AI-generated identity documents now cost as little as US$15 to manufacture and take roughly half an hour to produce. Real-time deepfake KYC bypass tools are commercially available. The article argues that the cost economics have inverted: biometric-first onboarding stacks are now the soft layer, and document-first verification with multi-source corroboration is structurally more resilient under current AI economics.

Fredrik Gröndahl
[Hero image: a passport laid open on a desk with a handwritten $15 price tag resting on its corner.]

It costs fifteen dollars and takes about half an hour to manufacture an identity document that will pass most biometric KYC checks deployed in financial services today. Sumsub, one of the larger identity verification vendors, reported the figure earlier this year. The number is not a marketing exaggeration. The same vendors who used to argue that biometric KYC made fraud impossible are now publishing the price list for the tools that defeat it.

This is what compliance officers and product leaders need to absorb in 2026. The economics that made biometric-first onboarding the smart upgrade five years ago have inverted. The architectural choice between biometric-first and document-first verification is no longer a question of user friction. It is a question of structural fraud resilience.

What the Numbers Actually Look Like

The World Economic Forum's January 2026 Cybercrime Atlas included a research publication that tested 17 face-swapping tools and 8 camera injection tools against current biometric KYC implementations. Most of the tools succeeded. The report concludes that synthetic faces, virtual camera tools, and API injection techniques together create a situation where every component of standard eKYC security can be defeated using common, commercially available software.

The OECD's AI Incidents Database registered a tool called JINKUSU CAM on April 6, 2026, which uses real-time deepfake facial and voice manipulation to bypass KYC controls on major crypto exchanges. The tool is operational, not theoretical. Its existence is an AI incident under the OECD framework specifically because the harm has already been realised.

Deloitte's Center for Financial Services projects that generative-AI-enabled fraud could cost the US financial sector US$40 billion by 2027. Individual CEO-fraud incidents conducted via real-time deepfake video calls have exceeded US$25 million per event according to Europol's Internet Organised Crime Threat Assessment. FinCEN has issued advisories warning that suspicious activity reports increasingly describe deepfake-enabled fraud at onboarding.

These numbers describe a market, not a series of incidents. The fraud industry has industrialised the production of synthetic identities and synthetic video. The unit economics are now in the attacker's favour at almost every layer where biometric KYC tries to defend.

Why Biometric KYC Is the Layer That Got Disrupted

Biometric KYC was originally sold as the upgrade to document checks because faking a face was supposed to be hard. Faking a passport requires either a real passport that has been physically tampered with, or a high-quality printing operation with the right materials, or a sophisticated digital forgery that can pass machine-readable zone validation, holographic checks, and the increasingly common NFC chip read.

Faking a face used to require similar investment. A photorealistic synthetic face was expensive, time-consuming, and required technical skill. A real-time deepfake video that could pass liveness detection was beyond the reach of most fraud operations.

That is no longer true. The same trajectory that made image generation a consumer feature on smartphones also made photorealistic synthetic identity production a US$15 line item. The cost curve for attackers has collapsed at exactly the layer where biometric KYC is concentrated. Document fraud, by comparison, has not become meaningfully cheaper in the same period. The materials, the printing, the chip programming, and the corporate registry corroboration that follows a real document remain out of reach of the typical fraud operation, even one armed with generative AI.

The asymmetry is now reversed from the one most KYC architectures were designed for. The layer that was supposed to be hardest to attack has become the cheapest. The layer that was supposed to be easiest to bypass is, in practice, often the more resilient one.

The Architectural Fork

The choice that follows from this is structural and is being made every day, often without compliance teams realising they are making it.

A biometric-first onboarding stack treats the selfie, the liveness check, and increasingly the video call as the primary act of verification. Documents are ancillary, used to read identity attributes that get cross-referenced against the biometric. The trust model concentrates at the moment of capture.

A document-first onboarding stack treats the verified identity document, corroborated against independent sources, as the primary act of verification. Biometric checks are supplementary, used to bind a person to a document that has already been substantively verified. The trust model concentrates in the corroboration, not in the capture.
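The two trust models can be made concrete with a small sketch. Everything here is illustrative: the step names are assumptions invented for this example, not any vendor's actual API. The point is purely structural: which steps a fraud operation must defeat to pass.

```python
# Illustrative sketch only. Step names are hypothetical, not a real
# vendor pipeline. "primary" marks where the trust model concentrates;
# "ancillary" steps support but do not carry the verification decision.
BIOMETRIC_FIRST = [
    ("selfie_capture",              "primary"),    # trust concentrates at capture
    ("liveness_check",              "primary"),
    ("document_attribute_crossref", "ancillary"),  # document read, not deeply verified
]

DOCUMENT_FIRST = [
    ("document_verification",       "primary"),    # MRZ, holograms, NFC chip read
    ("multi_source_corroboration",  "primary"),    # registries, sanctions, tax records
    ("biometric_binding",           "ancillary"),  # selfie binds person to verified doc
]

def layers_to_defeat(stack):
    """The steps a fraud operation must beat for the stack to pass them."""
    return [name for name, role in stack if role == "primary"]
```

In the biometric-first stack, both primary layers sit at the moment of capture, so one deepfake toolchain attacks them together; in the document-first stack, the primary layers live in independently held records.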

Under the cost curves of 2023, the biometric-first stack was reasonable. The selfie was harder to fake than the document. The corroboration step was expensive enough to want to skip. Under the cost curves of 2026, the biometric-first stack is increasingly the soft layer. A US$15 fake document feeding into a US$200 deepfake selfie can defeat the entire chain at almost any volume the fraud operation chooses to run.

The document-first stack does not have this property because each verification source the document is checked against is independent. A corporate registry, a sanctions database, a tax authority record, a previous regulated firm's verified file. The fraud operation has to defeat each one individually, and the cost of defeating them does not collapse when AI gets cheaper, because they are not AI-generated artifacts.
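The cost asymmetry above can be sketched as a toy model. The US$15 and US$200 figures come from the article; the corroboration-source costs are loudly marked assumptions chosen only to show the shape of the argument, not real price points.

```python
# Toy attacker-cost model. Independent layers mean costs ADD: defeating
# one layer earns no discount on the next. The $15 and $200 figures are
# from the article; all other costs are ASSUMED for illustration.
def attack_cost(layer_costs: dict) -> float:
    """Total cost to defeat every layer in a chain of independent checks."""
    return sum(layer_costs.values())

biometric_first = {
    "synthetic_document": 15.0,    # US$15 AI-generated ID (article figure)
    "deepfake_selfie":    200.0,   # US$200 real-time deepfake (article figure)
}

document_first = {
    "synthetic_document":    15.0,     # the cheap layer is only the first link
    "corporate_registry":    5_000.0,  # ASSUMED: needs a real filing or an insider
    "tax_authority_record":  5_000.0,  # ASSUMED
    "prior_regulated_file": 10_000.0,  # ASSUMED: a previously verified identity
}

print(attack_cost(biometric_first))  # 215.0
print(attack_cost(document_first))   # 20015.0
```

The absolute numbers are invented; the structural point is not. When AI halves the cost of synthetic faces, the biometric-first total roughly halves with it, while the document-first total barely moves, because most of its cost sits in layers that are not AI-generated artifacts.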

What This Means for Tooling Decisions

The vendor pitch in 2023 was "add liveness detection." That pitch is still being made in 2026 by some vendors, often with phrases like "advanced deepfake detection" or "behavioural biometric analysis" doing the work. These tools are not without value, and the deepfake detection arms race will continue. But the question that compliance teams should be asking is not how good their deepfake detection is. It is how exposed they are if their deepfake detection gets defeated next week.

The honest answer for a biometric-first onboarding stack is that the exposure is large, because the entire verification chain depends on a layer where attacker economics are improving faster than defender economics. The honest answer for a document-first stack with multi-source corroboration is that the exposure is structurally bounded: each additional source adds its own independent cost to the attack, and defeating one source earns the attacker no discount on the next.

For management companies, fund administrators, and TCSPs evaluating onboarding tooling, the practical question to ask of any vendor or platform is the architectural one. Where does the verification primarily live? In the moment of capture, or in the corroboration that follows? A vendor whose entire pitch is about better selfie analysis is selling at the layer that is being disrupted. A vendor whose pitch is about how documents are verified, versioned, corroborated, and made reusable across firms is selling at the layer that is structurally more durable.

Closing

The deepfake arms race is real, and it will continue. Defenders will keep improving detection. Attackers will keep improving generation. In the short term the cost asymmetry favours attackers, because generative AI makes their side of the race cheaper much faster than it makes the defender's side of the race better.

The way out of the race is not to win it. It is to depend less on the layer where the race happens. Document-driven evidence, multi-source corroboration, version-pinned records, and reusable verified identity across regulated firms are the structural choices that reduce the proportion of the verification chain that biometric arms race outcomes can compromise.

A US$15 fake ID is a serious problem if it is the first link in a chain that the rest of your verification depends on. It is a much smaller problem if it is the first link in a chain where every subsequent link is held by an independent source the fraud operation cannot cheaply defeat.

The compliance question of 2026 is no longer whether your deepfake detection is good enough. It is what fraction of your verification chain still depends on the layer that just got commoditised.

Want a structural review of where your onboarding stack concentrates trust, and how exposed that concentration is to current deepfake economics? Talk to our team.