AI agents in production: automating AML screening for a FinTech

By Ahmed "Riz" Ratul · 2026-03-24 07:48:47 · AI, Agents, FinTech, Compliance

How we built AI agents that screen against 1.7M+ sanctions records in seconds. Not a demo — it's processing real compliance cases.

HubSecure is a multi-tenant FinTech compliance platform I co-built for BCV Group. The hardest problem wasn't the blockchain KYC or the multi-tenant architecture — it was making AML screening fast enough to be useful.

The manual process

Before AI: a compliance officer receives a new customer application. They manually search the customer's name (and name variations) against sanctions lists, PEP databases, and adverse media. They cross-reference addresses, dates of birth, and known associates.

Average time: 4 hours per customer. For a platform processing hundreds of applications per month, that's 5+ full-time compliance officers doing nothing but name-matching.

What we built

An AI agent pipeline that:

1. Ingests the application — name, DOB, nationality, address, known associates

2. Generates name variations — transliterations, common misspellings, alias patterns (this is where LLMs shine — they understand that "Mohammed" has 30+ valid Latin spellings)

3. Screens against 1.7M+ records — OFAC SDN, EU sanctions, UN consolidated list, PEP databases, adverse media

4. Scores risk — weighted scoring based on match confidence, record severity, and jurisdiction

5. Generates a structured report — the compliance officer gets a pre-filled assessment with evidence links, not a raw data dump
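As a rough illustration of step 4, a weighted score might combine match confidence, record severity, and jurisdiction like this (the weights, record types, and field names below are invented for the sketch, not HubSecure's actual values):

```python
# Hypothetical weights for illustration only.
SEVERITY = {"sanctions": 1.0, "pep": 0.6, "adverse_media": 0.4}
JURISDICTION_WEIGHT = {"high_risk": 1.0, "standard": 0.7}

def risk_score(matches: list[dict], jurisdiction: str = "standard") -> float:
    """Weighted risk score in [0, 1]: each match contributes its model
    confidence scaled by record severity; the strongest match is then
    scaled by a per-jurisdiction weight."""
    if not matches:
        return 0.0
    contributions = [
        m["confidence"] * SEVERITY.get(m["record_type"], 0.5)
        for m in matches
    ]
    return max(contributions) * JURISDICTION_WEIGHT.get(jurisdiction, 0.7)
```

The real system customizes these weights per jurisdiction (more on that below); the point is only that the score is a deterministic function of the match evidence, so it is auditable.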

Architecture decisions

Why not traditional fuzzy matching? We tried it first. Levenshtein distance and Soundex catch simple typos but miss transliteration variants entirely. "Abdulrahman" vs "Abd al-Rahman" vs "Abdul Rahman" — these are the same person, and traditional fuzzy matching scores them as different names.
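To make the failure mode concrete, here is a standard Levenshtein implementation (nothing project-specific): a one-letter typo scores as a single edit, while transliteration variants of the same name score three edits apart, far enough that any threshold tight enough to avoid false positives will miss them.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[-1] + 1,               # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

# A plain typo sits within a tight edit-distance threshold...
print(levenshtein("mohamed", "mohammed"))            # 1
# ...but transliteration variants of the same name blow past it.
print(levenshtein("abdulrahman", "abd al-rahman"))   # 3
```

Three edits on an eleven-character name is the same relative distance as a completely different surname, so tightening or loosening the threshold just trades one failure mode for the other.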

An LLM with the right prompt and a few-shot examples handles this effortlessly. It understands naming conventions across Arabic, Cyrillic, Chinese, and Thai transliteration systems.
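A sketch of what building that few-shot prompt can look like. The example names, variants, and wording are illustrative, not the production prompt; the model call itself is omitted.

```python
# Hypothetical few-shot examples; the production prompt differs.
FEW_SHOT = [
    ("Mohammed", ["Muhammad", "Mohamed", "Mohamad", "Muhammed", "Mohammad"]),
    ("Abdulrahman", ["Abd al-Rahman", "Abdul Rahman", "Abdurrahman"]),
]

def build_variation_prompt(name: str) -> str:
    """Assemble a few-shot prompt asking the model for transliteration
    variants, common misspellings, and alias patterns, one per line."""
    lines = ["List plausible Latin-script variants of each name, one per line.", ""]
    for example, variants in FEW_SHOT:
        lines.append(f"Name: {example}")
        lines.append("Variants:")
        lines.extend(variants)
        lines.append("")
    # The model completes the final "Variants:" section.
    lines.append(f"Name: {name}")
    lines.append("Variants:")
    return "\n".join(lines)
```

The prompt is sent to the fast model, and the returned variants are then fed into the deterministic sanctions lookup, so a hallucinated variant costs a wasted query, never a wrong match.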

Why agents, not a single LLM call? The screening has distinct phases that benefit from different models and retry strategies. Name variation generation uses a fast model (Haiku). Sanctions matching is deterministic (database lookup). Risk scoring uses a reasoning model (Sonnet). Each step can fail and retry independently.

Why not a third-party screening API? Cost and control. Enterprise AML screening APIs charge $2-5 per check. At volume, that's significant. More importantly, the client needs to customize scoring weights per jurisdiction and add custom screening rules. A third-party API is a black box.
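A minimal sketch of that per-step orchestration, assuming a simple linear pipeline (step names, retry budgets, and the state-passing shape are illustrative; the real system presumably carries richer state and logging):

```python
import time
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Step:
    name: str
    run: Callable[[Any], Any]   # transforms the pipeline state
    retries: int = 2            # extra attempts after the first failure
    backoff: float = 1.0        # base sleep in seconds, doubled per attempt

def run_pipeline(state: Any, steps: list[Step]) -> Any:
    """Run steps in order; each step retries independently, so a transient
    LLM failure in one phase never restarts the deterministic lookups."""
    for step in steps:
        for attempt in range(step.retries + 1):
            try:
                state = step.run(state)
                break
            except Exception:
                if attempt == step.retries:
                    raise  # this step's retry budget is exhausted
                time.sleep(step.backoff * 2 ** attempt)
    return state
```

Because each phase is a separate `Step`, the LLM-backed steps can get generous retry budgets while the deterministic database lookup gets none, which is the main operational argument for the agent decomposition.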

Results

  • Screening time: 4 hours → 45 seconds
  • False positive rate: lower than manual screening (the AI is better at name transliteration than most humans)
  • Cost per screening: ~$0.03 (LLM API calls + compute)
  • Human review: still required for high-risk flags, but the AI does 90% of the work
The lesson for other products

If your team has humans doing repetitive pattern-matching work — screening, categorization, triage, data extraction — that's not "too complex for AI." That's exactly where AI agents earn their place.

The key is building it as an augmentation layer, not a replacement. The compliance officer still makes the final call. The AI just eliminated 3 hours and 55 minutes of mechanical work.