Synthetic Identity Fraud: Why Manual KYC Cannot Catch What It Was Not Designed to See
AI-generated synthetic identities are designed to defeat exactly the manual checks most firms still rely on. This article examines what the 2026 SmartSearch Compliance Report reveals about the gap between awareness and action, and what an effective response actually looks like.

There is a particular kind of compliance failure that does not look like a failure at the time it happens. Every check is performed. Every document is reviewed. Every box is ticked. The customer file looks complete. Months or years later, a regulator or law enforcement agency reveals that the customer was never a real person at all. The identity was synthetic. The documents were fabricated to pass exactly the checks the firm was running.
This is the threat the 2026 SmartSearch Compliance Report puts at the centre of its findings, and it deserves more attention than the headline numbers it produced.
What the Data Actually Says
The most quoted figure from the report is the £33.9 billion that UK regulated firms collectively spend on compliance, with an estimated 36% of that on processes that could be automated. That number captures attention, but it understates the more uncomfortable finding underneath it.
The report, based on a Censuswide survey of 1,000 senior decision-makers across UK finance, property, legal, and accountancy firms, found that 54% of identity verification checks are still performed manually. The proportion is broadly consistent across sectors: finance and property at 55%, legal at 54%, accountancy at 52%. Eighty-seven percent of the same firms acknowledge that up to half of their manual, repetitive compliance work could already be automated with existing technology. Only 30% are using AI for sanctions screening, and fewer than 40% use or plan to use it for enhanced transaction monitoring.
The framing in the report is that this is an inefficiency problem. It is not. It is a coverage problem.
The Threat Has Changed Shape
Synthetic identity fraud is not new, but the cost of producing it has collapsed. A synthetic identity is a person who does not exist, constructed from a combination of real and fabricated data. Unlike traditional identity theft, there is no victim to call the bank or notice unusual activity on a credit file. The customer simply lives inside the firm's systems as a complete and consistent record.
What changed in the past 24 months is the production model. AI tools allow fraudsters to generate plausible identity documents, supporting evidence, beneficial ownership chains, and source-of-funds narratives at a scale that no manual process was designed to absorb. Firms are no longer evaluating the occasional fabrication slipped into a queue of legitimate applications. They are filtering against an industrial output of statistically valid forgeries optimised against the exact checks the firm performs.
The CEO of SmartSearch put the asymmetry plainly in the report: a manual checklist is not a meaningful defence against fraud generated at machine speed and scale. The image is uncomfortable because it is accurate.
Why a 54% Manual Rate Is Now a Risk Indicator
A manual identity check is built around a particular assumption: that an experienced reviewer can detect inconsistency between a person and the documents they present. Photographs that do not match. Signatures that vary. Dates that do not add up. Stories that do not line up with documentation.
Synthetic identities are designed to remove all of those signals. They are internally consistent by construction. The passport matches the address. The address matches the utility bill. The corporate structure resolves cleanly. The source-of-funds narrative is plausible for the jurisdiction and the customer profile. The reviewer has nothing to react to, because the inconsistencies the reviewer was trained to catch have been engineered out before the file ever reached them.
This is why the 54% figure matters. It is not that manual review is slow. It is that manual review is increasingly looking at the wrong layer. The fraud is no longer in the surface inconsistency between a person and their paperwork. It is in the patterns across many applications, in the corroboration of documents against independent sources, and in the alignment between what a customer states and what the wider data record actually shows.
A human reviewer cannot run that comparison at the speed and scale it needs to be run. The 87% of firms that say up to half of this work could already be automated are, in effect, telling their own regulators where the gap is.
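The pattern layer described above can be illustrated with a minimal sketch. Assuming applications arrive as simple records (the field names, applicant IDs, and threshold here are hypothetical, not any real onboarding schema), a reuse check across supposedly unrelated applicants surfaces exactly the signal that no single-file review can see:

```python
from collections import defaultdict

def find_shared_attributes(applications, fields=("phone", "address", "device_id"), threshold=2):
    """Flag attribute values that recur across supposedly unrelated applications.

    A synthetic-identity 'factory' often reuses infrastructure (devices,
    addresses, phone numbers) across many fabricated applicants. Each
    application is a dict; any field value shared by `threshold` or more
    applicants is returned along with the applicant IDs involved.
    """
    seen = defaultdict(set)
    for app in applications:
        for field in fields:
            value = app.get(field)
            if value:
                seen[(field, value)].add(app["applicant_id"])
    return {key: ids for key, ids in seen.items() if len(ids) >= threshold}

apps = [
    {"applicant_id": "A1", "phone": "+44 7700 900001", "device_id": "dev-9f3"},
    {"applicant_id": "A2", "phone": "+44 7700 900002", "device_id": "dev-9f3"},
    {"applicant_id": "A3", "phone": "+44 7700 900003", "device_id": "dev-777"},
]

flags = find_shared_attributes(apps)
# ("device_id", "dev-9f3") is shared by applicants A1 and A2
```

Each application here is internally consistent and would pass a manual file review on its own; the anomaly only exists across the population, which is the layer a human reviewer working one file at a time never sees.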
The Supervisory View
The SmartSearch data is UK-specific, but the supervisory pattern across the EU and the markets that orbit it is converging on the same point. Recent enforcement by the CSSF in Luxembourg has focused not on firms missing obvious red flags, but on firms failing in the structural work that sits behind individual checks. The €185,000 fine against Rakuten Europe Bank in January 2026 turned on incomplete verification of beneficial ownership for corporate clients and transaction monitoring that flagged activity without adequate follow-up. The €283,000 fine against AllianzGI Luxembourg identified a risk analysis that omitted more than a thousand natural-person investors directly invested in the funds the branch managed. The €785,000 penalty against Fuchs & Associés Finance turned on doubts about the true identity of beneficial owners that the firm had not attempted to resolve before continuing the relationship.
Mauritius has moved in the same direction. The Financial Intelligence and Anti-Money Laundering (Administrative Penalties) Regulations came into effect in November 2025, formalising direct administrative fines, and the FSC revoked or suspended licences across more than thirty entities in early 2025 alone, with beneficial ownership and customer due diligence failures consistently among the cited causes. The Mauritius AML/CFT/CPF Bill 2026, currently progressing, expands FIU powers and tightens beneficial ownership requirements further.
The pattern across these regimes is consistent. Regulators are no longer asking whether the firm performed a check. They are asking whether the firm had the structural capacity to detect the risk it was exposed to. That is exactly the question synthetic identity fraud forces. A firm performing 54% of identity checks manually is not necessarily breaking any specific rule today. But it is operating a defence designed for a threat that has moved on.
The same logic shapes how AMLA is preparing for direct supervision from 2028. The published methodology for selecting the 40 firms it will supervise directly does not reward firms that produce more alerts. It rewards firms that produce defensible reasoning for the alerts they raise and dismiss. A high false positive rate is no longer just an operational cost. It is a supervisory signal that the screening function is not doing what it claims to do.
What an Effective Response Looks Like
The practical response is not to replace human reviewers with a model and call it done. It is to redesign the verification stack so that the work machines can do reliably is done by machines, and human attention is applied to the cases that benefit from judgment.
Three structural changes follow from this.
Cross-source corroboration. Identity documents should be verified against independent registries, sanctions and PEP data, and historical activity, not just inspected for surface validity. A document that passes a visual check but cannot be corroborated against an independent record is not yet verified. It is a claim awaiting evidence.
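As a rough illustration of the principle, not any particular vendor's API: a claim can be held in "unverified" status until a minimum number of independent sources return a matching record. The source names, lookup shapes, and threshold below are invented for the sketch:

```python
def corroborate(claim, sources, minimum=2):
    """Check a customer claim against independent data sources.

    `sources` maps a source name to a lookup function returning the value
    that source holds for the claim's subject (or None if it has no record).
    The claim is marked "verified" only when at least `minimum` independent
    sources agree with it; a document that merely passes visual inspection
    stays "unverified" — a claim awaiting evidence.
    """
    matches = [name for name, lookup in sources.items()
               if lookup(claim["subject"]) == claim["value"]]
    return {
        "claim": claim,
        "corroborated_by": matches,
        "status": "verified" if len(matches) >= minimum else "unverified",
    }

# Hypothetical lookups standing in for registry and credit-file checks
companies_register = {"ACME LTD": "12 King St"}
credit_bureau = {"ACME LTD": "12 King St"}

result = corroborate(
    {"subject": "ACME LTD", "value": "12 King St"},
    {"companies_register": companies_register.get,
     "credit_bureau": credit_bureau.get},
)
# result["status"] == "verified"
```

The design choice that matters is the default: verification is withheld until corroboration succeeds, rather than granted when inspection fails to find a flaw.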
Document-level evidence rather than self-reported information. Risk decisions should be based on documents the firm has obtained and verified, not on the customer's account of themselves. This is the foundation of the document-driven approach: every risk decision should trace back to evidence the firm can produce in an audit. It is what regulators are increasingly testing for, and it is also what makes synthetic identities visible. They are designed to defeat shallow checks. They struggle against deeper ones run consistently across the customer base.
Auditable reasoning. Every accepted customer, every dismissed alert, and every escalation should generate a record that explains why the decision was made and what evidence supported it. This is no longer a documentation preference. AMLA's published selection methodology evaluates the reasoning trail behind compliance decisions, not just the outcome. A firm that cannot show its work is not in a stronger position than a firm that did the work and recorded it.
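In practice, the last two points can be as simple as an append-only record that binds each decision to the evidence and reasoning behind it. A minimal sketch, with field names that are illustrative rather than any regulatory schema:

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)
class ComplianceDecision:
    """One immutable record tying a decision to its evidence and reasoning."""
    subject_id: str
    decision: str        # e.g. "accept", "dismiss_alert", "escalate"
    reasoning: str       # why, in the reviewer's or system's own words
    evidence_refs: tuple # identifiers of the verified documents relied on
    decided_by: str
    decided_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def append_decision(log_path, decision):
    """Append a decision to a JSON-lines audit log; history is never rewritten."""
    with open(log_path, "a") as fh:
        fh.write(json.dumps(asdict(decision)) + "\n")
```

The point of the structure is that "show your work" becomes a query, not a reconstruction exercise: every accepted customer, dismissed alert, and escalation already carries the reasoning trail a supervisor would ask for.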
Closing
The compliance question of the past decade was how to do verification work faster. The compliance question of the next decade is what verification work the firm is currently unable to do at all. The 2026 SmartSearch numbers are useful not because they prove that automation saves money, but because they expose how much of the current defensive perimeter is built on assumptions about fraud that no longer hold.
A compliance programme that depends on a human reviewer noticing inconsistency in an internally consistent identity is a programme operating on borrowed time. The firms that move first to redesign that perimeter, around document-driven evidence, cross-source corroboration, and auditable reasoning, are not just lowering their compliance cost. They are closing a gap that synthetic identity fraud was specifically built to exploit.
Have questions about how synthetic identity fraud affects your verification stack, or want a review of where manual checks may be leaving gaps? Talk to our team.