February 3, 2026

Your ISO 42001 Certification Won't Make Your AI System Compliant

What the EU AI Act Actually Requires

By Modulos · 9 min read

The EU AI Act regulates products, not organisations. That distinction changes everything about quality management.

Enterprises across Europe are pursuing ISO 42001 certification as their EU AI Act compliance strategy. It's a reasonable instinct — 42001 is the international standard for AI management systems, it's certifiable, and it demonstrates responsible AI governance.

There's one problem: it solves the wrong problem.

ISO 42001 certifies your organisation's AI management system. The EU AI Act regulates AI systems — products. These are fundamentally different objects of conformity, governed by different legal frameworks. One certifies that your house is well-managed. The other requires that each thing you build in that house meets specific safety requirements before anyone can use it.

prEN 18286, currently at CEN Enquiry stage, is the first quality management system standard built specifically to bridge that gap. It creates an organisational system whose sole mission is ensuring product-level regulatory conformity. Understanding why it exists — and why existing certifications can't substitute for it — is essential for any provider of high-risk AI systems approaching the August 2026 deadline.

Products, Not Organisations

The EU AI Act operates under the New Legislative Framework, the same product safety architecture that governs medical devices, machinery, radio equipment, and pressure vessels. This framework has been regulating physical products for decades. Its extension to AI systems carries specific structural consequences that most AI governance discussions overlook.

Under this framework, each AI system must demonstrate conformity with essential requirements before being placed on the market or put into service. Conformity assessment happens per AI system, not per organisation. The provider is responsible for each system meeting requirements at the moment of market placement. Technical documentation, risk management, and post-market monitoring are all system-specific obligations.

Article 17 of the EU AI Act mandates that providers of high-risk AI systems implement a quality management system. But the QMS exists to serve product conformity — it is not an end in itself. It's the organisational machinery that ensures each AI system you ship actually meets the essential requirements laid out in Articles 9 through 15.

This is why "we have ISO 42001" is a category error. ISO 42001 asks: does your organisation have responsible AI governance? The EU AI Act asks: does this specific AI system comply with the essential requirements? The first question is worthwhile. It does not answer the second.

[Figure: diagram illustrating the ISO 42001 certification process for AI systems]

What Article 17 Actually Demands

Article 17(1) lists thirteen elements the QMS must cover. Reading them grouped by function, rather than in the Act's lettered (a)-to-(m) order, reveals that they are overwhelmingly product-oriented.

Product design and development requirements cover the design, design control, and design verification of each AI system, along with examination, testing, and validation procedures with defined frequency.

Product compliance strategy requires a documented regulatory compliance strategy including technical specifications, standards, or other solutions chosen to meet each essential requirement. This is a per-system compliance map, not an organisation-level policy statement.

Product data governance demands systems and procedures covering the full data pipeline: acquisition, collection, analysis, labelling, storage, filtration, mining, aggregation, retention, and any other operation performed before market placement.

Product risk management mandates the Article 9 risk management system, integrated into the QMS and applied to each AI system throughout its lifecycle.

Product lifecycle operations span post-market monitoring, serious incident reporting procedures, and resource management including supply chain oversight.

Product documentation requires technical documentation and record-keeping that demonstrate compliance to auditors, notified bodies, and competent authorities.

The remaining elements — communication with regulatory authorities, accountability frameworks with assigned roles, and change management — are organisational enablers. They exist to support product conformity. The QMS points outward toward each AI system, not inward toward itself.

prEN 18286 translates every one of these thirteen elements into concrete, auditable requirements. Annex ZA maps each clause of the standard directly to the corresponding paragraph of Article 17(1). This isn't interpretation — it's designed correspondence.

Presumption of Conformity: The Mechanism That Matters

Using a harmonised standard is not just good practice. It activates the most powerful legal mechanism in EU product safety law.

Once prEN 18286 is cited in the Official Journal of the European Union, compliance with its normative clauses triggers the presumption of conformity under Article 40 of the AI Act, the mechanism reserved for harmonised standards published in accordance with Regulation (EU) No 1025/2012. This flips the burden of proof: market surveillance authorities must demonstrate that your QMS is insufficient, rather than you having to prove it is sufficient. For any provider operating at scale across multiple AI systems and multiple jurisdictions, this is not a nice-to-have. It is the difference between a defensible compliance position and an open-ended regulatory negotiation.

ISO 42001 cannot provide this. It is an international standard — valuable, widely recognised — but it is not a harmonised European standard developed under a standardisation request from the European Commission. It carries no Annex ZA. It creates no presumption of conformity. It triggers no burden-flip.

The practical difference is stark. With prEN 18286 compliance, when an authority requests your QMS documentation — and under Article 17, they will — you point to a recognised framework with defined requirements. The conversation starts from a position of demonstrated conformity. Without it, you are constructing a bespoke argument for each element of Article 17, and the authority decides whether your interpretation holds.

The Hub of the Ecosystem

prEN 18286 does not operate in isolation. It sits at the centre of a network of harmonised standards, each addressing a specific essential requirement of the EU AI Act.

prEN 18228 covers the risk management system required by Article 9. prEN 18284 addresses data quality and governance under Article 10. prEN 18229-1 handles transparency, logging, and human oversight under Articles 12 through 14. prEN 18229-2 specifies accuracy and robustness testing under Article 15. prEN 18282 provides cybersecurity specifications, also under Article 15.
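
If it helps to see that ecosystem as data, here is a minimal lookup sketch in Python. The standard numbers and article pairings come from the list above; the dictionary structure and helper function are purely illustrative.

```python
# Illustrative mapping of the draft harmonised standards to the EU AI Act
# articles they address, as listed above. The structure and helper are a
# sketch, not part of any standard.
STANDARDS_MAP = {
    "prEN 18286":   {"articles": [17],         "topic": "quality management system"},
    "prEN 18228":   {"articles": [9],          "topic": "risk management system"},
    "prEN 18284":   {"articles": [10],         "topic": "data quality and governance"},
    "prEN 18229-1": {"articles": [12, 13, 14], "topic": "transparency, logging, human oversight"},
    "prEN 18229-2": {"articles": [15],         "topic": "accuracy and robustness testing"},
    "prEN 18282":   {"articles": [15],         "topic": "cybersecurity"},
}

def standards_for_article(article: int) -> list[str]:
    """Return the draft standards addressing a given EU AI Act article."""
    return [name for name, spec in STANDARDS_MAP.items()
            if article in spec["articles"]]

print(standards_for_article(15))  # ['prEN 18229-2', 'prEN 18282']
```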

The QMS standard is the connective tissue. Its Clause 4.4 requires providers to select which standards or specifications they will use for each essential requirement and document that choice. Clause 8.1 mandates integrating the risk management system. Clause 8.5 connects to data governance. The QMS makes the standards ecosystem operational rather than theoretical — it turns a collection of technical specifications into a functioning compliance system.

ISO 42001 has a place in this picture. It can inform organisational governance practices and internal policies. But it is one input to the system, not the system itself. Annex D of prEN 18286 maps the correspondence between the two standards explicitly, and the gaps are visible: ISO 42001 has no equivalent for the regulatory compliance strategy, post-market monitoring, serious incident reporting, supply chain compliance requirements, or change management procedures that prEN 18286 specifies.

Five Requirements Your Current QMS Doesn't Have

Whatever QMS you currently operate — ISO 9001, ISO 42001, or something bespoke — prEN 18286 introduces requirements that almost certainly don't exist in your current system. Five stand out for their enterprise impact.

Regulatory compliance strategy with documented approach selection. Clause 4.4 requires you to document, for each AI system, which essential requirements apply, which standards or specifications you are using to meet each one, and — critically — to justify any gaps with objective evidence. If you rely on approaches beyond harmonised standards or common specifications, you must document and demonstrate that each essential requirement is met. This is a per-system compliance architecture, not a blanket policy.
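
As a sketch of what that per-system compliance map might look like in practice, consider the following Python model. Every field name and the gap-detection rule are our own illustrative choices, not anything Clause 4.4 prescribes.

```python
from dataclasses import dataclass, field

@dataclass
class RequirementApproach:
    """One essential requirement and the documented approach chosen to meet it."""
    requirement: str         # e.g. "Article 10 - data and data governance"
    approach: str            # harmonised standard, common specification, or other
    justification: str = ""  # objective evidence, required when deviating from
                             # harmonised standards (illustrative rule below)

@dataclass
class ComplianceStrategy:
    """Per-AI-system regulatory compliance strategy (hypothetical schema)."""
    ai_system: str
    approaches: list[RequirementApproach] = field(default_factory=list)

    def gaps(self) -> list[str]:
        """Requirements met outside harmonised standards but lacking justification."""
        return [a.requirement for a in self.approaches
                if "prEN" not in a.approach and not a.justification]

strategy = ComplianceStrategy(
    ai_system="credit-scoring-v3",
    approaches=[
        RequirementApproach("Article 9 - risk management", "prEN 18228"),
        RequirementApproach("Article 10 - data governance", "internal procedure DG-7"),
    ],
)
print(strategy.gaps())  # ['Article 10 - data governance'] needs documented evidence
```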

Pre-determined change management for continuous learning. Clause 9.3.4 addresses what happens when your AI system updates through continuous learning or scheduled retraining. Those changes need to be anticipated, documented, verified, and validated as part of the initial conformity assessment. The technical documentation must describe expected performance changes, version identification, impact assessment, and cumulative effects of sequential updates. This is the standard's answer to "my model retrains weekly" — and it's more permissive than many expected, provided you plan for it upfront.
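
One way to operationalise a pre-determined change plan is to gate every retraining release against performance bounds declared at the initial conformity assessment. The sketch below illustrates that idea; the metric, thresholds, and field names are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class ChangeEnvelope:
    """Performance bounds declared up front in the technical documentation."""
    metric: str
    min_value: float              # worst performance the assessment covered
    max_drift_per_update: float   # largest step change anticipated per retrain

def release_allowed(envelope: ChangeEnvelope,
                    previous: float, candidate: float) -> bool:
    """A retrained version stays inside the pre-determined envelope only if it
    clears the declared floor and its step change was anticipated."""
    within_floor = candidate >= envelope.min_value
    within_drift = abs(candidate - previous) <= envelope.max_drift_per_update
    return within_floor and within_drift

envelope = ChangeEnvelope(metric="F1", min_value=0.90, max_drift_per_update=0.02)
print(release_allowed(envelope, previous=0.93, candidate=0.94))  # True
print(release_allowed(envelope, previous=0.93, candidate=0.89))  # False: below floor
```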

Serious incident reporting with codified timelines. Clause 9.5 implements the reporting requirements directly from the Act. Critical infrastructure disruption: report within two days. Death of a person: ten days. All other serious incidents: fifteen days. Provisional reports are permitted, but the clock starts when you establish a causal or "reasonably plausible" link between the AI system and the incident. You need documented procedures covering deployer-to-provider reporting, internal escalation, root cause investigation, and resource allocation for authority inquiries.
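
The reporting clocks themselves are simple to encode. A sketch like the following, with the day counts taken from the summary above, could sit behind internal escalation tooling; the category names are our own shorthand.

```python
from datetime import date, timedelta

# Day counts as summarised above. The clock starts when a causal or
# "reasonably plausible" link between the AI system and the incident
# is established, not when the incident occurs.
REPORTING_WINDOWS = {
    "critical_infrastructure_disruption": 2,
    "death": 10,
    "other_serious_incident": 15,
}

def reporting_deadline(category: str, link_established: date) -> date:
    """Latest date by which the report to the authority is due."""
    return link_established + timedelta(days=REPORTING_WINDOWS[category])

print(reporting_deadline("critical_infrastructure_disruption", date(2026, 9, 1)))
# 2026-09-03
```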

Supply chain compliance that follows the components. Clause 9.2 requires that every external supplier of components, data, training services, or test procedures be evaluated against documented criteria, selected, monitored, and periodically re-evaluated. Their outputs must conform to your QMS requirements. The provider cannot outsource and forget — responsibility stays with you regardless of where the work happens. This includes the foundation model you are building on top of, the annotation services you contracted, and the test datasets you purchased.
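
Supplier oversight with documented criteria and periodic re-evaluation lends itself to an equally simple registry. Everything below is an assumption for illustration; in particular, the annual cadence is an example, not a figure from the standard.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Supplier:
    """One external supplier of components, data, or services (illustrative)."""
    name: str
    supplies: str                 # e.g. "foundation model", "annotation services"
    criteria_met: bool            # evaluated against documented criteria
    last_evaluated: date
    reevaluation_cycle_days: int = 365  # assumed cadence for the example

def due_for_reevaluation(suppliers: list[Supplier], today: date) -> list[str]:
    """Suppliers whose periodic re-evaluation is overdue."""
    return [s.name for s in suppliers
            if today - s.last_evaluated > timedelta(days=s.reevaluation_cycle_days)]

registry = [
    Supplier("FM Labs", "foundation model", True, date(2025, 1, 15)),
    Supplier("LabelCo", "annotation services", True, date(2025, 11, 1)),
]
print(due_for_reevaluation(registry, date(2026, 2, 3)))  # ['FM Labs']
```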

Fundamental rights as a verification concern. This is not a policy checkbox. Annex A provides a structured consultation framework requiring engagement with affected persons starting at the inception stage, continuing through design and development, and extending into validation. Verification activities for fundamental rights hazards can include stakeholder consultation, real-world conditions testing, cross-functional expert review, and consultation with national bodies that enforce fundamental rights protections. The standard expects you to identify who might be harmed and ask them — or credible proxies — before you ship.

What to Do Before the Standard Is Finalised

prEN 18286 is at CEN Enquiry stage. It will evolve before final publication. But the Article 17 requirements it implements are already law, and the August 2026 deadline for Annex III high-risk systems is fixed. Waiting for the standard to be finalised before starting work means running out of time.

Start with a gap analysis. Map your existing QMS processes against Article 17(1)(a) through (m). Be honest about what is genuinely covered versus what is only superficially adjacent. An ISO 9001 document control procedure is not a post-market monitoring system. An ISO 42001 risk assessment is not an Article 9 risk management system applied to a specific AI product.
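
A first-pass gap analysis can be as simple as a coverage table over the thirteen elements. The labels below paraphrase Article 17(1)(a) through (m); the status values are our own shorthand for the honesty test described above.

```python
# Paraphrased Article 17(1) elements (a)-(m). Statuses are illustrative:
# "covered", "adjacent" (a superficially similar process exists), or "missing".
ARTICLE_17_GAP_ANALYSIS = {
    "a": ("regulatory compliance strategy",             "missing"),
    "b": ("design, design control, design verification","covered"),
    "c": ("development, quality control and assurance", "covered"),
    "d": ("examination, test and validation procedures","adjacent"),
    "e": ("technical specifications and standards",     "missing"),
    "f": ("data management systems and procedures",     "adjacent"),
    "g": ("Article 9 risk management system",           "missing"),
    "h": ("post-market monitoring system",              "missing"),
    "i": ("serious incident reporting procedures",      "missing"),
    "j": ("communication with competent authorities",   "adjacent"),
    "k": ("record-keeping systems and procedures",      "covered"),
    "l": ("resource management, security of supply",    "adjacent"),
    "m": ("accountability framework",                   "covered"),
}

open_items = [f"17(1)({k}) {desc}" for k, (desc, status)
              in ARTICLE_17_GAP_ANALYSIS.items() if status != "covered"]
print(f"{len(open_items)} of 13 elements need work")
```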

Identify which AI systems in your portfolio trigger high-risk classification under Annex III. For each one, document the applicable essential requirements and your chosen compliance approach. This mapping is the nucleus of what prEN 18286 Clause 4.4 will demand.

Assess your supply chain exposure. Which components, models, data, and services are externally supplied? Do you have evaluation criteria for those suppliers? Can you demonstrate ongoing monitoring? If not, start building that capability now.

Establish serious incident reporting procedures immediately. The timelines are already in the Act. You do not need a standard to tell you to prepare for a two-day reporting window on critical infrastructure incidents.

The standard will give you the presumption of conformity. The underlying regulatory obligations do not wait for it.

Need help assessing your QMS readiness for the EU AI Act? Talk to us.
