Watermarking & Synthetic Content
2 December 2026
Article 50 transparency. Provisional 3-month grace under AI Omnibus (pending OJ publication).
---
General Application
2 August 2026
Article 101 GPAI fines start. Annex III high-risk would shift to 2 December 2027 under the AI Omnibus provisional agreement.
---
High-Risk Annex III Systems
2 December 2027
Provisional date under AI Omnibus (pending formal adoption and OJ publication). Annex I product-route would shift to 2 August 2028.
---

EU AI Act Compliance,
End to End

Everything you need to know about EU AI Act compliance: obligations, the four-gates risk model, conformity assessments, penalties, and how Modulos accelerates the process. Updated as the Digital Omnibus develops.

Timeline and Compliance Milestones

The EU AI Act entered into force on 1 August 2024 and applies in stages. The AI Omnibus deal agreed by Council and Parliament on 7 May 2026 would push the main high-risk dates back: stand-alone high-risk AI to 2 December 2027 and product-integrated high-risk AI to 2 August 2028. The deal is pending formal adoption and Official Journal publication; treat the agreed dates as the operative planning baseline.

1. August 2024: The Act enters into force
2. February 2025: Banned AI practices and AI literacy obligations start
3. August 2025: Foundation-model (GPAI) rules start
4. February 2026: Commission post-market monitoring rules
5. August 2026: General application date

Omnibus deal dates:

6. December 2026: Synthetic-content labelling obligations
7. August 2027: National AI regulatory sandboxes deadline
8. December 2027: High-risk AI in regulated services
9. August 2028: High-risk AI in regulated products
10. By end of 2030: High-risk AI in EU IT systems

What the Omnibus Changes

The Digital Omnibus is not just a delay. It reshapes timelines, tightens some rules, and simplifies others. Here is what is moving, what is staying, and what you should do about it.

Dates would shift

Annex III stand-alone high-risk to 2 December 2027; Annex I product-integrated high-risk to 2 August 2028.

New Article 5 prohibition

AI systems intended to generate non-consensual sexual or intimate imagery, and CSAM. Systems lacking effective technical safeguards are also caught.

Watermarking grace cut

Article 50 transparency: 3-month transitional grace from general application; deadline 2 December 2026.

Status: provisional

Pending Council and Parliament endorsement, legal-linguistic revision, and Official Journal publication "in the coming weeks".

How Compliance Actually Works

The EU AI Act doesn't sort AI systems into tidy risk tiers. It runs four independent checks, and the obligations stack. A single AI system can trigger multiple gates simultaneously.

Most guides get this wrong. Here's how compliance actually works.

GATE 1 · Article 5: Prohibited Practices
Does this AI practice cross a red line?

GATE 2 · Article 6 / Annex I, III: High-Risk Systems
Is this AI used in a high-stakes domain?

GATE 3 · Article 50: Transparency
Does this AI interact with people, detect emotions, or generate synthetic media?

GATE 4 · Chapter V: General-Purpose AI
Are you providing a foundation model or GPAI?

Obligations stack: one system can trigger multiple gates
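The four-gates logic can be sketched as independent boolean checks whose results accumulate. The following Python sketch is purely illustrative: the field names and gate labels are our own shorthand, not terms from the Act.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    prohibited_practice: bool      # Gate 1: Article 5 red lines
    high_stakes_domain: bool       # Gate 2: Article 6 / Annex I, III
    interacts_or_generates: bool   # Gate 3: Article 50 transparency triggers
    provides_gpai: bool            # Gate 4: Chapter V foundation models

def triggered_gates(s: AISystem) -> set[str]:
    """Each gate is an independent check; obligations stack rather
    than resolving to a single risk tier."""
    gates = set()
    if s.prohibited_practice:
        gates.add("prohibited")
    if s.high_stakes_domain:
        gates.add("high-risk")
    if s.interacts_or_generates:
        gates.add("transparency")
    if s.provides_gpai:
        gates.add("gpai")
    return gates

# The credit-scoring chatbot example trips two gates at once:
bot = AISystem(prohibited_practice=False, high_stakes_domain=True,
               interacts_or_generates=True, provides_gpai=False)
assert triggered_gates(bot) == {"high-risk", "transparency"}
```

The point of modelling the gates as a set rather than a single label is exactly the one made above: a customer-service bot may return only `{"transparency"}`, while a credit-scoring chatbot returns two entries.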

Examples

Credit Scoring Chatbot

High-risk (essential services) + Transparency (human interaction)

Customer Service Bot

Transparency only: disclose it's AI

Medical Triage System

High-risk under Annex III + Article 50 if it interacts with patients directly. GPAI obligations stay with the underlying model provider.

The EU AI Act Follows Your AI

The EU AI Act has extraterritorial reach. Article 2 brings non-EU providers and deployers into scope when an AI system is placed on the EU market or when its outputs are used in the Union.


A non-EU provider builds an AI credit-scoring model. A US fintech licenses it. EU customers receive credit decisions from it. Article 2 brings both the non-EU provider and the non-EU deployer into scope here, because the system is placed on the EU market and its outputs are used in the Union.

Most companies misclassify themselves under the EU AI Act

Provider, deployer, importer, distributor: four of the commercial roles defined in the EU AI Act. The trap isn't understanding what they mean. It's recognising which one you actually are. Most companies guess wrong, and the difference can be six figures of compliance work.

1. Article 25(1)(b)–(c): You become a Provider

You become a provider when you turn a system into a high-risk one.

Fine-tune or integrate a foundation model into an Annex III use case. If the result qualifies as high-risk under Article 6, Article 25(1)(c) reclassifies you from deployer to provider. Substantially modify an already-high-risk system and Article 25(1)(b) does the same. Provider obligations are heavy: technical documentation, conformity assessment, the Articles 9 to 15 stack.

2. Article 25(1)(a): You become a Provider

You become a provider when you rebrand a high-risk system.

White-label a third-party high-risk AI system and put your name or trademark on it, and Article 25(1)(a) says you are now the provider. The original developer’s documentation does not transfer to you. You inherit all provider obligations, even if the system was built by someone else.

3. Annex III, Article 50: You are a Deployer

You become a deployer the moment you use AI on EU persons.

Internal HR screening tool that touches EU candidates? You are a deployer of a high-risk system under Annex III. Customer service chatbot interacting with EU customers? Deployer with Article 50 transparency obligations on top. Credit-scoring model used inside a fintech’s product? Deployer of high-risk AI under Annex III. “We just bought it from a vendor” is not a defence.

4. Articles 23, 24: Importer or Distributor

You become an importer or distributor without realising.

Reselling a non-EU high-risk AI system into the EU market? Importer obligations under Article 23, including verifying the provider has done the conformity assessment. Acting as a SaaS reseller or marketplace in a value chain for high-risk AI? Probably distributor obligations under Article 24. The high-risk qualifier matters: these duties attach specifically to high-risk systems, not to AI in general.

Misclassification is the most expensive mistake on this page. The default assumption that “we just use AI” rarely survives contact with the Act’s definitions. Get the role right before you scope the program: every other obligation flows from it.
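As a rough mental model, the Article 25 reclassification triggers reduce to a simple rule: any one of them overrides whatever role you thought you had. This Python sketch is illustrative only; the parameter names are our shorthand for Article 25(1)(a) to (c), not language from the Act.

```python
def effective_role(base_role: str, *, rebranded: bool = False,
                   substantially_modified: bool = False,
                   made_high_risk: bool = False) -> str:
    """Toy sketch of Article 25(1)(a)-(c): rebranding a high-risk system,
    substantially modifying one, or modifying a system so it becomes
    high-risk each reclassifies you as the provider."""
    if rebranded or substantially_modified or made_high_risk:
        return "provider"
    return base_role

# White-labelling a vendor's high-risk system under your own trademark:
assert effective_role("deployer", rebranded=True) == "provider"
# Using a vendor system unchanged, as intended, leaves you a deployer:
assert effective_role("deployer") == "deployer"
```

The asymmetry is the trap: nothing in the rule ever demotes you back to deployer, which is why "we just bought it from a vendor" stops being true the moment you fine-tune, repurpose, or rebrand.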

Run the role classification

Compliance Requirements

The Act lays out a range of requirements for high-risk AI systems; each is summarised in plain language below with a pointer to the relevant article.

* Required for public-law deployers and private entities providing public services, plus deployers of certain Annex III high-risk systems (creditworthiness assessment under 5(b) and life/health insurance risk assessment under 5(c)). Annex III point 2 (critical infrastructure) is excluded from the FRIA trigger.

Conformity Assessments

High-risk AI systems must undergo Conformity Assessments to demonstrate compliance before market entry. This structured process ensures your AI systems meet regulatory requirements.

Step 1 - A high-risk AI system is developed.

Step 2 - The system undergoes the conformity assessment and complies with the AI requirements:

- Establish, implement, document, and maintain a risk management system to address the risks posed by a high-risk AI system.
- Implement effective data governance, including bias mitigation, training, validation, and testing of data sets.
- Maintain up-to-date technical documentation in a clear and comprehensive manner.
- Ensure that high-risk AI systems allow for the automatic recording of events (logs) over their lifetime.
- Design systems to ensure sufficient transparency for deployers to interpret outputs and use appropriately.
- Develop systems to maintain an appropriate level of accuracy, robustness, and cybersecurity throughout their lifecycle.
- Ensure proper human oversight during the period the system is in use.

Step 3 - Registration of certain high-risk systems in the EU database under Article 49 (primarily Annex III).

Step 4 - A declaration of conformity is signed, and the AI system bears the required CE marking under Article 48.

The system can be placed on the market. Once substantial changes happen in the AI system's lifecycle, repeat from Step 2.

Disclaimer: The steps outlined above are intended to provide a general overview of the conformity assessment process. They should not be considered exhaustive and are not intended as legal or technical advice.

Penalties for Non-Compliance

The EU AI Act imposes significant fines for non-compliance, calculated as a percentage of the offending company's global annual turnover or a predetermined amount. For most undertakings the higher of the two applies; SMEs and start-ups face lower caps under Article 99(6). The 7 May 2026 AI Omnibus provisional agreement extends certain SME regulatory exemptions to small mid-caps (up to 750 employees or €150M turnover); the exact scope of the simplifications transferred to small mid-caps will be set by the Official Journal text and is being verified.

Ensure your AI systems comply with the EU AI Act to avoid these penalties.

Request a Demo

Penalty Breakdown

- Non-compliance with prohibitions: up to €35M or 7% of turnover
- Supplying incorrect, incomplete, or misleading information: up to €7.5M or 1% of turnover
- Non-compliance with other obligations: up to €15M or 3% of turnover
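The "higher of the two" rule, and the SME "lower of the two" carve-out under Article 99(6), reduce to a one-line calculation. This sketch is illustrative only; the function name is ours, and the percentage is passed as a fraction.

```python
def applicable_cap(fixed_cap_eur: float, pct_of_turnover: float,
                   global_turnover_eur: float, sme: bool = False) -> float:
    """Maximum fine: the higher of the fixed cap or the turnover
    percentage; for SMEs and start-ups, Article 99(6) applies the
    lower of the two instead."""
    turnover_based = pct_of_turnover * global_turnover_eur
    if sme:
        return min(fixed_cap_eur, turnover_based)
    return max(fixed_cap_eur, turnover_based)

# Prohibited-practice violation at €1B global turnover: 7% (€70M) exceeds the €35M cap
assert applicable_cap(35e6, 0.07, 1e9) == 70e6
# The same violation by an SME caps at the lower figure
assert applicable_cap(35e6, 0.07, 1e9, sme=True) == 35e6
```

For large undertakings the turnover-based figure usually dominates, which is why the percentages, not the headline euro amounts, are the numbers to plan around.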

How Modulos accelerates EU AI Act compliance

The Modulos AI Governance Platform addresses each EU AI Act obligation with purpose-built tools.

Risk Management
Quantitative risk assessment with Monte Carlo simulation
Documentation & Records
AI Agents auto-generate and find evidence in your repos
Human Oversight & QMS
Built-in review workflows with full audit trail
Multi-Framework Compliance
140+ controls mapped to EU AI Act, ISO 42001, NIST AI RMF

Why Modulos for the EU AI Act

Three credentials specific to the EU AI Act, ordered by weight. The links point to the public bodies the work was done with.

CEN-CENELEC JTC 21

Highest weight

Contributed to the European AI standards that grant presumption of conformity

In practice, EU AI Act compliance for high-risk systems will run through CEN-CENELEC harmonised standards. Once their references are cited in the Official Journal under Article 40, harmonised standards developed through Joint Technical Committee 21 (JTC 21) grant a legal presumption of conformity for the Section 2 high-risk requirements they cover (Articles 9 to 15). Standards relevant to the Article 17 quality management system support compliance evidence rather than create presumption directly. Modulos contributed to the development of these European AI standards through participation in CEN-CENELEC JTC 21 working groups.

JTC 21 (CEN-CENELEC)

AESIA, Spanish AI regulatory sandbox

First EU AI Act sandbox

Supported the first EU AI Act regulatory sandbox

Spain established the first EU AI Act regulatory sandbox under Royal Decree 817/2023, operated by AESIA, Europe’s first dedicated AI supervisory agency. Twelve Spanish companies tested high-risk AI systems against the AI Act in a controlled environment. The sandbox produced 16 official guidelines now published by AESIA, the first structured set of interpretative criteria from a public authority in Europe. Modulos supported the work of the sandbox.

AESIA

EU AI Pact

Voluntary early commitment

An EU AI Pact signatory

Modulos is a signatory of the EU AI Pact, the European Commission’s voluntary pledge framework that asks companies to start applying AI Act principles ahead of the regulation’s full applicability. The Pact’s three core pledges are an AI governance strategy, mapping high-risk AI systems against the Act’s criteria, and promoting AI literacy across staff.

Public AI Pact signatory list

And the certifiable management system the proof flows into

Modulos holds CertX product conformity certification against ISO/IEC 42001 (certificate 213-001/24). ISO/IEC 42001 is not a substitute for AI Act conformity assessment, but it is the management-system spine most mature AI Act programs build on. See the ISO 42001 page.

How the EU AI Act stacks with other frameworks

Most organisations operate the AI Act alongside other regulations and standards rather than instead of them. Here is where the Act sits relative to the frameworks teams most often ask about.

Standard / Regulation | Domain | Relation to the EU AI Act
ISO/IEC 42001 | International AI management system | Complementary
NIST AI RMF | Voluntary U.S. risk-management framework | Complementary
GDPR | Binding EU data-protection regulation | Different layer
EU Machinery Regulation | Safety-critical machinery in the EU | Operational glue

EU AI Act vs ISO/IEC 42001

Complementary

ISO/IEC 42001 is the certifiable international management system standard for AI. Article 17 of the AI Act requires high-risk providers to operate a quality management system. Holding ISO 42001 certification is one of the strongest practical signals of meeting Article 17, although not formally a substitute. Most mature AI Act compliance programs land inside an ISO 42001 management system.

EU AI Act vs NIST AI RMF

Complementary

NIST AI RMF is voluntary U.S. guidance organised around four core functions (Govern, Map, Measure, Manage). The AI Act is binding EU regulation. Implementing AI RMF builds the risk-management practices Article 9 expects of high-risk providers, but it does not substitute for the Act's conformity-assessment and CE-marking requirements. See the NIST AI RMF page.

EU AI Act vs GDPR

Different layer

Both the AI Act and the GDPR are binding EU regulations, but they govern different layers. GDPR governs personal data; the AI Act governs AI systems. They overlap sharply on biometric processing, bias-testing with sensitive data, and the rights of individuals affected by automated decisions. Compliance with the GDPR does not satisfy the AI Act, and vice versa; both apply.

EU AI Act vs EU Machinery Regulation

Operational glue

Under the 7 May 2026 AI Omnibus provisional agreement, machinery is exempt from direct AI Act applicability. AI-specific health and safety requirements for AI systems classified as high-risk under the AI Act are routed through delegated acts under the Machinery Regulation itself, with a Commission obligation to issue guidance to economic operators. This routing remains pending formal adoption and Official Journal publication. The Canton of Zurich Innovation Sandbox for AI report still illustrates the value of running a single integrated AI management system across regimes. See the Sandbox report (PDF).

Comparing platforms? See how 20 AI governance platforms address the EU AI Act in our 2026 enterprise buyer’s guide.

FAQ about EU AI Act

What is the EU AI Act?

The EU AI Act (Regulation 2024/1689) is the European Union’s comprehensive law on artificial intelligence, in force since 1 August 2024. It is a product safety regulation: it bans some AI practices outright, requires conformity assessments and CE marking for high-risk AI systems, imposes transparency obligations, and sets model-level obligations for general-purpose AI providers.

When do EU AI Act obligations apply?

The EU AI Act came into force on 1 August 2024. Obligations apply on staggered dates: Article 5 prohibitions and Article 4 AI literacy from 2 February 2025; Chapter V general-purpose AI model-provider obligations from 2 August 2025; general application on 2 August 2026. The 7 May 2026 AI Omnibus provisional agreement would shift Annex III high-risk obligations to 2 December 2027 and Annex I product-integrated high-risk obligations to 2 August 2028, pending formal adoption and OJ publication.

Is the EU AI Act already in force?

Yes. The AI Act has been legally in force since 1 August 2024. Already applicable: Article 5 prohibitions and Article 4 AI literacy from February 2025; Chapter V general-purpose AI model-provider obligations from August 2025. Under the 7 May 2026 AI Omnibus provisional agreement (pending formal adoption and OJ publication), Annex III high-risk obligations would apply from 2 December 2027 and Annex I product-integrated high-risk obligations from 2 August 2028.

Is the EU AI Act a regulation or a directive?

A regulation. It applies directly and uniformly across all 27 EU member states from the moment it enters into force, without requiring national implementing legislation. This is the same legal instrument as the GDPR. Member states cannot soften or vary its core obligations through domestic law.

What does the EU AI Act do?

It bans a small set of AI practices outright; requires high-risk AI systems to undergo conformity assessment and CE marking before being placed on the market; imposes transparency obligations on AI systems that interact with people or generate synthetic content; and sets model-level obligations for general-purpose AI providers, including documentation and copyright-related disclosures.

Which AI practices are prohibited?

Article 5 prohibits social scoring of natural persons that meets the conditions in Article 5(1)(c); manipulative or exploitative AI that causes harm; real-time remote biometric identification in public spaces by law enforcement (with narrow exceptions); biometric categorisation based on sensitive characteristics; emotion recognition in workplaces and education; predictive policing based solely on profiling; and untargeted facial-image scraping for facial recognition databases.

What makes an AI system high-risk?

Two routes under Article 6. Annex I: AI systems that are safety components of products covered by listed Union harmonisation legislation (medical devices, machinery, vehicles, radio equipment, toys, and others), where Article 6(1)(b) requires the product to undergo a third-party conformity assessment. Annex III: stand-alone AI systems in eight domains, namely biometrics, critical infrastructure, education, employment, essential public and private services, law enforcement, migration and border control, and the administration of justice and democratic processes.

Who is in scope of the EU AI Act?

Article 2 sets the scope: providers placing AI systems on the EU market (regardless of where the provider is established); deployers established or located in the Union; importers and distributors of AI systems in the EU; product manufacturers placing AI systems on the EU market; authorised representatives of non-EU providers; and providers and deployers in third countries when the AI system outputs are used in the Union.

Does the EU AI Act apply to UK companies?

Yes, if the UK company places an AI system on the EU market or its AI system outputs are used in the EU. The Act applies based on who is affected, not where the provider is headquartered. The UK’s post-Brexit status does not exempt UK companies from EU AI Act obligations when EU users or markets are involved.

Does the EU AI Act apply to US companies?

Yes. A US company selling an AI system into the EU market, or whose AI system outputs are used by EU customers, is in scope. A US provider placing a high-risk AI system on the EU market must appoint an authorised representative established in the EU under Article 22. Non-EU providers of general-purpose AI models have a separate authorised-representative rule under Article 54.

How do I comply with the EU AI Act?

Inventory your AI systems; classify each against the four independent gates (prohibited, high-risk, transparency, GPAI); confirm your role under Articles 3 and 25 (provider, deployer, importer, distributor). If you are a provider of a high-risk AI system, the Article 16 stack applies: run a conformity assessment under Article 43, build the Article 11 technical file, draw up the EU Declaration of Conformity, affix the CE mark under Article 48, and operate a quality management system under Article 17 (typically through ISO/IEC 42001); register the system under Article 49 where required (mainly Annex III, including systems self-exempted from high-risk classification under Article 6(3), for which the AI Omnibus provisional agreement would preserve registration). Deployers operate under Article 26 instead, with a narrower set of duties unless Article 25 reclassifies them as providers.

What are the penalties for non-compliance?

Up to €35 million or 7% of global annual turnover for prohibited-practice violations, whichever is higher. Up to €15 million or 3% for most other obligations. Up to €7.5 million or 1% for supplying incorrect, incomplete, or misleading information to authorities. Small and medium enterprises and start-ups face proportionally lower caps under Article 99(6). National market surveillance authorities enforce most penalties; Article 101 fines on GPAI model providers are imposed by the European Commission directly.

How does the EU AI Act relate to ISO/IEC 42001?

ISO/IEC 42001 is the international management system standard for AI. AI Act Article 17 requires high-risk AI providers to implement a quality management system. Holding ISO 42001 certification is one of the strongest practical signals of meeting Article 17, though not formally a substitute. Most mature AI Act compliance programs operate inside an ISO 42001 management system.

How does the EU AI Act differ from the NIST AI RMF?

NIST AI RMF is voluntary US guidance for AI risk management. The EU AI Act is binding EU regulation with legal obligations and penalties. Implementing NIST AI RMF builds the risk-management practices the Act expects, particularly Article 9 risk management for high-risk providers, but it does not substitute for the Act’s specific compliance and conformity-assessment requirements.

Has the AI Omnibus delayed the EU AI Act?

Not legally yet. On 7 May 2026, Council and Parliament reached a provisional AI Omnibus agreement that would shift Annex III stand-alone high-risk obligations from 2 August 2026 to 2 December 2027 and Annex I product-integrated high-risk obligations from 2 August 2027 to 2 August 2028. The deal would also add a new Article 5 prohibition on non-consensual intimate content and CSAM, compress the Article 50 watermarking grace period to 3 months (deadline 2 December 2026), and postpone national regulatory sandboxes to 2 August 2027. Pending formal adoption and OJ publication; treat the agreed dates as the operative planning baseline.

How does Modulos help with EU AI Act compliance?

Modulos automates the compliance workflow for high-risk AI systems: AI system inventory, classification across the four gates, Article 9 risk management, Article 10 data governance, Article 11 technical documentation, Article 13 transparency, Article 17 quality management, and ongoing monitoring. The platform holds CertX product conformity certificate 213-001/24 against ISO/IEC 42001:2023, the standard most relevant to demonstrating Article 17 QMS requirements.

Ensure Your AI Compliance

Whether you are already using or considering AI in your business, keeping these upcoming regulatory changes in mind is essential. Modulos can support your compliance journey.