AI Agents Are Coming to KYC. Here's What They Will Need from the Platforms They Run On.
Anthropic's 5 May 2026 announcement of ten agent templates for financial services, including a KYC screener, together with a partnership with FIS on AML investigation agents, marks the moment AI agents become a real part of the compliance stack. This article argues that the firms that benefit will not be the ones that deploy agents fastest, but the ones whose data infrastructure is ready to make agentic decisions defensible.

On 5 May 2026, Anthropic announced ten ready-to-run agent templates for financial services, including a KYC screener that "assembles entity files, reviews source documents, and packages escalations for compliance review." In the same announcement, FIS, one of the world's largest financial technology providers, committed publicly to building "an agent that compresses AML investigations from days to minutes" on Anthropic's stack.
For compliance officers at management companies and TCSPs, this is the news of the week. It is also the moment at which "should AI be in our compliance stack?" stops being a question worth asking. The question that matters now is what the data layer underneath the agent has to look like for the agent to actually work.
This is not a new question for us. It is the question Fidify was built around. The arrival of credible, well-engineered KYC agents from a serious AI lab makes the data layer question urgent for everyone else.
What the Anthropic Announcement Actually Says
The ten agent templates cover work across research and client coverage, finance and operations, and middle-office compliance. The KYC screener template is explicit. It is designed to assemble entity files, review source documents, and package escalations for human compliance review. Anthropic ships it as a plugin in Claude Cowork and Claude Code, and as a cookbook for the Claude Managed Agents platform, which means firms can run the same template either alongside an analyst or as a long-running autonomous workflow with full audit logging.
The FIS partnership is the more significant signal. FIS sits at the centre of how money moves for thousands of financial institutions worldwide. Their CEO's statement that an AML investigation agent is being built jointly with Anthropic, with credit decisioning, fraud prevention and deposit retention agents to follow, tells the market that the institutional buyers of compliance infrastructure are now seriously committing to AI agents in regulated workflows.
This is welcome. AI agents are well-suited to a substantial portion of KYC work that has historically consumed analyst time without producing analyst-quality decisions: chasing missing documents, checking completeness, cross-referencing identities against registries, formatting escalation packages. That work should be done by agents. Compliance officers should be doing the work that requires judgment.
The difficulty is what the agent acts on.
What an Agent Needs to Be Defensible Under AMLA
An agent that screens KYC files is only as good as the files it screens. This is not a comment about Anthropic's engineering. It is a structural property of any AI system. The agent reads what is in front of it, applies its instructions, and produces an output. If what is in front of it is a fragmented document base, a self-reported customer narrative, or a beneficial ownership chain reconstructed from inconsistent source records, the agent's output will reflect that fragmentation.
Under the AMLR's selection methodology for AMLA's direct supervision in 2028, this matters in three specific ways.
The audit trail problem. AMLA's published methodology evaluates the reasoning trail behind compliance decisions, not just the outcome. An agent that escalates a file in 30 seconds is impressive. An agent that can produce a complete record of the documents it examined, the sources it corroborated against, the policy clauses it applied, and the reasoning it followed to the escalation decision is defensible. The first is a productivity feature. The second is an audit position. The platform underneath has to be capable of producing the second, not just the first.
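The difference between the two outputs can be made concrete as a data-shape question. The sketch below is illustrative only: the class and field names (`EscalationRecord`, `is_defensible`) are hypothetical, not Anthropic's or any vendor's actual schema. The point is that a defensible record carries the full trail, not just the outcome.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class EscalationRecord:
    """One agent decision, with the reasoning trail behind it (hypothetical schema)."""
    decision: str                # e.g. "escalate"
    documents_examined: list     # document versions the agent read
    sources_corroborated: list   # external registries and lists checked
    policy_clauses_applied: list # clause IDs from the firm's own policy
    reasoning: list              # ordered reasoning steps to the decision

    def is_defensible(self) -> bool:
        # A bare outcome is not enough: every supporting field must be populated.
        return all([self.documents_examined, self.sources_corroborated,
                    self.policy_clauses_applied, self.reasoning])

record = EscalationRecord(
    decision="escalate",
    documents_examined=["passport-v3", "register-extract-v1"],
    sources_corroborated=["companies-register", "sanctions-list"],
    policy_clauses_applied=["CDD-4.2", "EDD-1.1"],
    reasoning=["ownership chain crosses a high-risk jurisdiction",
               "clause EDD-1.1 requires escalation in that case"],
)
print(json.dumps(asdict(record), indent=2))
```

A record that answers "escalate" with the other four fields empty is the 30-second productivity feature; a record where `is_defensible()` holds is the audit position.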
The document-versioned evidence problem. A compliance decision made today must be reproducible against the documents that existed today, not against whatever the document base looks like next year. If an agent reviews a passport and accepts it, the system needs to record exactly which version of which document it reviewed, when, and against which policy. Document-driven KYC is not a stylistic preference. It is the substrate that makes agentic decisions auditable.
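Version pinning needs nothing more exotic than content hashing. The function names below (`pin_decision`, `reproducible`) are invented for illustration; the mechanism is the standard one of identifying a document version by the hash of its exact bytes.

```python
import hashlib

def version_id(document: bytes) -> str:
    """A document version is identified by the hash of its exact content."""
    return hashlib.sha256(document).hexdigest()

def pin_decision(document: bytes, policy: str, outcome: str) -> dict:
    """Anchor a decision to the exact version and policy it was made against."""
    return {"document_version": version_id(document),
            "policy": policy,
            "outcome": outcome}

def reproducible(decision: dict, document: bytes) -> bool:
    """A decision is reproducible only against the version it was pinned to."""
    return decision["document_version"] == version_id(document)

passport_2026 = b"passport scan, issued 2026"
decision = pin_decision(passport_2026, policy="CDD-4.2", outcome="accepted")

assert reproducible(decision, passport_2026)            # same evidence: reproducible
assert not reproducible(decision, b"renewed passport")  # updated document: new version, new decision
```

An updated document fails reproduction by construction, which forces a new decision and a new record rather than silently overwriting the old evidence.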
The cross-portal identity problem. Most external users in our customer base interact with multiple regulated firms. If the same individual is verified separately by every firm they connect to, the agent at each firm operates on a partial view of the customer. If the verified identity is reusable across firms with cryptographic guarantees, the agent at every firm operates on the same canonical record. The architecture decision compounds. A platform that fragments identity creates work the agent cannot undo.
These are not theoretical concerns. They are exactly the concerns that distinguish a KYC agent that performs in a demo from a KYC agent that holds up under a CSSF inspection.
How Fidify Already Approaches This
We have been building toward this moment since the platform's first design decisions.
A dedicated AI agent runs per client, not as a shared model. Each client environment in Fidify includes its own AI agent service that operates against that client's own data, under that client's own policies, with isolation enforced at the namespace level. When the agent makes a decision for one management company, it is not influenced by the document base of another. This is a design decision we made before agentic AI was the dominant frame in the market, and it is the structural property that lets each firm run agents against their own configured policies without cross-contamination.
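Namespace-level isolation reduces to a hard scoping rule: an agent handle is bound to one client's namespace at construction and has no code path to any other. A toy sketch, with hypothetical names, of that structural property:

```python
class ClientAgent:
    """An agent bound to a single client's namespace at construction time."""

    def __init__(self, namespace: str, store: dict):
        self._namespace = namespace
        self._store = store

    def read(self, key: str):
        # Every read is prefixed with the agent's own namespace;
        # another client's data is simply unreachable from this handle.
        return self._store.get(f"{self._namespace}/{key}")

store = {
    "manco-a/policy": "CSSF framework",
    "manco-b/policy": "FSC framework",
}
agent_a = ClientAgent("manco-a", store)
agent_b = ClientAgent("manco-b", store)
```

Each agent resolves "policy" to its own firm's framework, which is the property that prevents one firm's document base from influencing another's decisions.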
The UBO assessment workflow is already an agentic pipeline. Beneficial ownership identification in Fidify runs as a multi-phase agent workflow: organisational chart construction, ownership analysis, control analysis, senior managing official designation, AI report generation, human assessment review, and approval. The agent works through the structure the way a human analyst would, but with consistent application of the firm's policy across every assessment. The output is not a UBO list; it is a structured assessment with the evidence for every conclusion auto-populated against the source documents.
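A workflow of that shape can be sketched as an ordered pipeline in which each phase consumes the previous phase's output. The phase names follow the description above; the function bodies are placeholders, not the actual analysis logic.

```python
def build_org_chart(state):   return {**state, "chart": "built"}
def analyse_ownership(state): return {**state, "ownership": "traced"}
def analyse_control(state):   return {**state, "control": "assessed"}
def designate_smo(state):     return {**state, "smo": "designated"}
def generate_report(state):   return {**state, "report": "drafted"}

PHASES = [build_org_chart, analyse_ownership, analyse_control,
          designate_smo, generate_report]

def run_ubo_assessment(entity: dict) -> dict:
    """Run every phase in order; the output is a structured assessment
    that is then handed to human review and approval."""
    state = dict(entity)
    for phase in PHASES:
        state = phase(state)
    state["status"] = "awaiting human review"
    return state

assessment = run_ubo_assessment({"entity": "ExampleCo S.à r.l."})
```

The fixed phase order is what gives the consistency the paragraph describes: every assessment walks the same sequence, under the same policy, and ends at a human reviewer rather than an auto-approval.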
Risk assessment runs against a configurable policy engine. The agent does not impose a regulatory framework on the firm. The firm configures its own framework, expressed across four sub-policy categories covering settings, keywords and search, the knowledge base the agent reads, and the analysis instructions the agent follows. The agent is fluent enough in compliance domain terminology to execute any well-formed framework correctly. This is what makes the same platform deployable in Luxembourg under CSSF, in Mauritius under FSC, and in any other jurisdiction without rewriting the agent.
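The four sub-policy categories suggest a configuration shape along the following lines. The keys are inferred from the description above, not taken from an actual schema, and the example values are invented.

```python
cssf_policy = {
    "settings": {"risk_scale": ["low", "medium", "high"], "edd_threshold": "high"},
    "keywords_and_search": {"adverse_media_terms": ["sanction", "fraud"]},
    "knowledge_base": {"sources": ["CSSF circulars", "internal CDD manual"]},
    "analysis_instructions": {"escalate_if": "any hit on adverse_media_terms"},
}

REQUIRED_CATEGORIES = {"settings", "keywords_and_search",
                       "knowledge_base", "analysis_instructions"}

def is_well_formed(policy: dict) -> bool:
    """The agent can execute any framework, but only a complete one:
    all four sub-policy categories must be present."""
    return REQUIRED_CATEGORIES <= policy.keys()
```

Swapping the knowledge-base sources and instructions for, say, FSC material would be the whole of the jurisdiction change; the agent itself is untouched.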
End-to-end encryption with versioned documents underneath. The document base is encrypted client-side with per-document keys, and every document is referenced by version. When the agent reads a document and makes a decision, the decision is anchored to the exact version it read. The same document updated next year creates a new version, a new agent decision, and a new audit record. There is no ambiguity about what the agent saw.
A single cryptographic identity per external user, reusable across firms. The user's verified identity is held centrally, encrypted, and shared into each connected firm's environment under user control. When an agent at Firm A and an agent at Firm B both look at the same individual, they look at the same canonical, verified record. Neither firm sees the other's documents. Both firms benefit from the corroboration the verified identity provides.
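The single-identity model can be sketched as one canonical record plus per-firm grants: every connected firm resolves the same record, but only through a grant the user has issued. The class and method names below (`IdentityVault`, `grant`, `resolve`) are hypothetical, and the sketch deliberately omits the encryption layer to show only the sharing structure.

```python
class IdentityVault:
    """One verified record per user; firms see it only via user-issued grants."""

    def __init__(self):
        self._records = {}    # user_id -> canonical verified identity
        self._grants = set()  # (user_id, firm_id) pairs the user has approved

    def register(self, user_id, verified_identity):
        self._records[user_id] = verified_identity

    def grant(self, user_id, firm_id):
        self._grants.add((user_id, firm_id))

    def resolve(self, user_id, firm_id):
        # Same canonical record for every firm the user has connected;
        # no record at all for a firm without a grant.
        if (user_id, firm_id) in self._grants:
            return self._records[user_id]
        return None

vault = IdentityVault()
vault.register("u-1", {"name": "A. Example", "verified": True})
vault.grant("u-1", "firm-a")
vault.grant("u-1", "firm-b")
```

Firm A and Firm B corroborate against the identical record, while a firm the user never connected resolves nothing, which is the compounding architecture decision the paragraph describes.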
None of this was built in response to the 5 May announcement. It was built because the structural problems an agent faces in compliance were visible from the architecture-design phase. The announcement makes those problems urgent for the rest of the market.
What This Means for Firms Evaluating Tooling
If you are a compliance officer at a management company or a TCSP, the practical implication of the next 18 months of AI in this space is not that you need to evaluate AI tools. It is that you need to evaluate what your data layer looks like underneath any AI tool.
The right questions to ask of any KYC platform you consider, including ours, are the structural ones.
Is every document in the system version-pinned, so that a decision made today can be reproduced against the exact evidence it was made on? Is the audit trail capable of producing not just what was decided, but what was read, which sources were corroborated, and which policy clauses were applied? Is the customer's identity stored once and shared, or duplicated per firm? Is the policy engine configurable to the firm's own framework, or does the agent impose its own? Is the agent isolated to the firm's own environment, or shared across customers in ways that create data-flow questions?
These questions matter regardless of which AI lab the agents come from. They will matter more next year than they do today, because the AMLA selection process in 2028 will reward firms that can demonstrate defensible reasoning behind every decision, not firms that processed more files faster.
Closing
The 5 May announcement is good news. It signals that AI agents in compliance are being taken seriously by serious firms, and that the productivity gains from automating the genuinely repetitive parts of KYC are about to become broadly available. We expect more announcements like this through 2026 and into 2027.
The firms that benefit will not be the ones that deploy the agents fastest. They will be the ones whose data infrastructure is ready for the agents to act on. Document-versioned evidence, per-client isolation, configurable policy, reusable verified identity, and complete audit trails are not features that can be retrofitted under deadline pressure. They are platform decisions that take years to build correctly.
The agents are arriving. The question for compliance officers is whether the platform underneath them is ready to make their decisions defensible.
Want to see how Fidify's per-client AI agent works against a structured CDD data model, or get a walkthrough of how the platform produces audit-ready reasoning trails for every decision? Talk to our team.