March 18, 2026

We Train the Regulators. Here's What Financial Institutions Should Know

By Modulos · 6 min read

On March 10th, our CEO Kevin Schawinski delivered a training session on AI governance and risk-based supervision to EU financial supervisors at the EU Supervisory Digital Finance Academy.

Not a panel. Not a keynote. A working session on how to operationalise AI governance — and how to tell the difference between institutions that are doing it properly and those that are performing compliance.

We want to share some of what was covered, because if you're running AI in a financial institution, these are the questions your supervisors are now being trained to ask.


The distinction most institutions get wrong

The single most common mistake we see: treating ISO 42001 certification as an EU AI Act compliance strategy.

It isn't. ISO 42001 certifies your management system. The EU AI Act regulates your product. The Act operates under the New Legislative Framework — the same product safety architecture as medical devices. Calling ISO 42001 your AI Act strategy is a category error.

The standard that actually bridges this gap is prEN 18286 — the first quality management standard built for product-level AI Act conformity. Once harmonised, it triggers presumption of conformity, flipping the burden of proof. If your compliance team hasn't heard of it yet, they need to.

The risk pyramid is wrong

Every presentation on the EU AI Act uses the four-tier risk pyramid. Minimal, limited, high, unacceptable. Simple, clean, wrong.

[Figure: the Four Gates of the EU AI Act]

The Act doesn't sort systems into mutually exclusive tiers. It runs four independent compliance checks — what Kevin calls the Four Gates — and the obligations stack. A credit-scoring chatbot triggers Gate 2 (high-risk) AND Gate 3 (transparency) simultaneously. The term "limited risk" doesn't even appear as a classification category in the legislative text.

If your institution is classifying AI systems using the pyramid, you're missing stacked obligations. So are supervisors who use it.
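
To make the stacking concrete, here is a minimal sketch in Python. The Gate flags and classify function are our illustration of the Four Gates idea, not legislative text and not the Modulos platform; the point is simply that the four checks run independently and their results combine.

```python
from enum import Flag, auto

class Gate(Flag):
    """Illustrative only: the four independent checks, modelled as
    flags so obligations can stack rather than exclude one another."""
    NONE = 0
    PROHIBITED = auto()    # Gate 1: prohibited practices
    HIGH_RISK = auto()     # Gate 2: high-risk use cases (Annex III)
    TRANSPARENCY = auto()  # Gate 3: transparency obligations
    GPAI = auto()          # Gate 4: general-purpose AI obligations

def classify(system: dict) -> Gate:
    """Run all four checks independently; results stack, they don't exclude."""
    gates = Gate.NONE
    if system.get("prohibited_practice"):
        gates |= Gate.PROHIBITED
    if system.get("annex_iii_use_case"):     # e.g. credit scoring
        gates |= Gate.HIGH_RISK
    if system.get("interacts_with_humans"):  # e.g. a chatbot
        gates |= Gate.TRANSPARENCY
    if system.get("gpai_model"):
        gates |= Gate.GPAI
    return gates

# A credit-scoring chatbot triggers two gates at once; a pyramid
# would force it into a single tier and lose one set of obligations.
chatbot = {"annex_iii_use_case": True, "interacts_with_humans": True}
result = classify(chatbot)
print(Gate.HIGH_RISK in result and Gate.TRANSPARENCY in result)  # True
```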

If you can't put a € number on a risk, you can't manage it

The training included a risk taxonomy covering five categories — governance, technical, ethical, legal, and operational — each with a description, threat vectors, quantified economic impact, and mitigation controls.

The quantified-economic-impact column drew the strongest reaction. Most institutions assess AI risk with qualitative matrices: red, amber, green. That tells you nothing actionable. It doesn't inform investment decisions. It doesn't support insurance conversations. It doesn't help a board decide whether to fund a mitigation or accept the exposure.

In the Modulos platform, every risk gets a € number. Not because precision is possible in every case, but because the discipline of quantification forces honest reasoning. Two outcomes, both wins: the risk-adjusted ROI is negative and you kill the project early on paper (cheap), or you de-risk and proceed with eyes open. The only losing move is proceeding without quantification.
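
The arithmetic behind that discipline is simple enough to sketch. Every figure below is hypothetical (the probabilities, impacts, and costs are invented for illustration, not platform output):

```python
def risk_adjusted_roi(expected_benefit: float,
                      risks: list[tuple[float, float]],
                      mitigation_cost: float,
                      project_cost: float) -> float:
    """Expected loss = sum of probability x impact (in EUR) over the
    residual risks that mitigation does not remove."""
    expected_loss = sum(p * impact for p, impact in risks)
    net = expected_benefit - expected_loss - mitigation_cost - project_cost
    return net / project_cost

# Illustrative numbers only: two residual risks on a EUR 500k project.
risks = [
    (0.05, 2_000_000),  # 5% chance of a EUR 2M regulatory fine
    (0.20, 300_000),    # 20% chance of EUR 300k remediation cost
]
roi = risk_adjusted_roi(expected_benefit=900_000, risks=risks,
                        mitigation_cost=150_000, project_cost=500_000)
print(f"Risk-adjusted ROI: {roi:.0%}")  # 18% here; negative means kill early
```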

The insurance signal your risk team should be watching

This point generated the biggest reaction in the room.

Major insurers — AIG, Great American, WR Berkley — are seeking to exclude AI-related liabilities from corporate policies. WR Berkley's language covers "any actual or alleged use" of AI. That's not a minor carve-out.

The liability chain is now broken: AI developers disclaim liability in their terms of service. Deployers can no longer transfer risk to insurers. The liability sits on the deployer's balance sheet — your balance sheet.

This matters more than regulation because insurance exclusions take effect at the next policy renewal. No transition period. No jurisdictional variation. Governance frameworks will become prerequisites for AI coverage, the same pattern the industry saw with cyber insurance a decade ago.

Has your institution checked its insurance coverage for AI liability recently?

Agents break every assumption your governance framework makes

The session walked supervisors through four assumptions that every existing governance framework — the EU AI Act, NIST AI RMF, ISO 42001 — makes about AI systems. Agents violate all four:

  1. Organisations know what AI they have. Reality: employees deploy agents in hours without approval.
  2. Organisations control AI capabilities. Reality: agents acquire new functions via plugins. A coding assistant picks up credit decisioning capability from a plugin install.
  3. AI systems operate in isolation. Reality: agent networks exist. Failures propagate across organisational boundaries at machine speed.
  4. Human oversight is meaningful. Reality: agents can take 50 autonomous actions per minute. Human-in-the-loop is a legal fiction unless oversight is built into the architecture (see the sketch after this list).
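
What building oversight into the architecture means can be sketched in a few lines. This is our illustration, not a prescribed control: a gate that blocks high-impact actions until a human actually approves them.

```python
from typing import Callable

# Illustrative: which actions require a human decision before execution.
HIGH_IMPACT = {"transfer_funds", "change_credit_limit", "close_account"}

def execute(action: str, payload: dict,
            approve: Callable[[str, dict], bool]) -> dict:
    """Run an agent action, but require explicit human sign-off for
    anything on the high-impact list; `approve` should return True
    only after a real human decision, not a rubber stamp."""
    if action in HIGH_IMPACT and not approve(action, payload):
        return {"status": "blocked", "reason": "awaiting human approval"}
    return {"status": "executed", "action": action}

# An agent emitting 50 actions a minute hits this gate every time:
# oversight becomes a property of the system, not a promise.
print(execute("transfer_funds", {"amount_eur": 25_000},
              approve=lambda a, p: False))  # -> blocked
```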

Kevin assessed the OpenClaw agent framework against OWASP controls earlier this year. Every single control was marked "Not Executed." That is the state of agent governance today.

The eight things supervisors are now trained to demand

The session concluded with a checklist. Your institution should be able to produce all eight:

  1. A complete AI inventory with risk classification for every system
  2. Model and system cards documenting what each system does, how it was trained, and known limitations
  3. Data sheets with training data provenance and bias assessments at each lifecycle stage
  4. Fairness analysis with a documented choice of metric, rationale, and trade-offs
  5. Risk assessments quantified by economic impact, not qualitative ratings
  6. Continuous monitoring with live performance metrics and drift detection
  7. Incident response procedures with tested escalation paths
  8. Agent decision logs with full action chains, timestamps, and inputs/outputs at each step (one possible shape is sketched below)
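
For item 8, here is one possible shape for an agent decision log, sketched in Python. The field names and structure are our assumption of what a full action chain implies, not a regulatory schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentAction:
    """One step in an agent's action chain. Illustrative fields only."""
    timestamp: datetime
    agent_id: str
    action: str                           # e.g. "tool_call:credit_score_lookup"
    inputs: dict
    outputs: dict
    parent_action_id: str | None = None   # links the chain together

@dataclass
class DecisionLog:
    """Full, replayable chain of actions behind one agent decision."""
    decision_id: str
    actions: list[AgentAction] = field(default_factory=list)

    def record(self, action: AgentAction) -> None:
        self.actions.append(action)

log = DecisionLog(decision_id="loan-2026-0042")
log.record(AgentAction(
    timestamp=datetime.now(timezone.utc),
    agent_id="underwriting-agent-v3",
    action="tool_call:credit_score_lookup",
    inputs={"applicant_id": "A-1881"},
    outputs={"score": 612},
))
```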

Most institutions cannot produce most of this today. That's the gap — and that's the opportunity. The ones that close it first will have a fundamentally different conversation with their supervisors.

Why this matters now

There's a power asymmetry building. If an institution has a real-time risk quantification dashboard and the supervisor reviews a static PDF six months later, the information gap is obvious. But supervisors are closing that gap. This training is part of that effort.

The institutions that build operational AI governance infrastructure now — not performative compliance, not PDF reports gathering dust, but live, quantified, auditable governance — will be the ones that innovate fastest and navigate supervision most smoothly.

The technology exists. Supervisory expectations are rising to match. The question is whether you move before or after your regulator starts asking questions you can't answer.


Kevin Schawinski is CEO and co-founder of Modulos, Europe's first ISO 42001-certified AI governance platform. He is a member of the NIST AI Safety Institute Consortium (WG1, WG5), contributes to the EU AI Office GenAI Code of Practice, and advises the Swiss Federal Council on AI. Before founding Modulos, he was a professor at ETH Zurich, with a career spanning Oxford, Yale, and NASA; he has published over 200 peer-reviewed articles, including six in Nature and Science.

To access the full deck from the EU Supervisory Digital Finance Academy session, click here.

To see what operational AI governance looks like for financial institutions, visit modulos.ai