High-Risk AI Deadline
2 August 2026

Get Ready for the EU AI Act

The EU Artificial Intelligence Act is setting a global standard for AI regulation, much as GDPR did for data privacy. Here, we provide an overview of the Act and explain how to prepare your AI systems for compliance.

Timeline and Compliance Milestones

The EU AI Act entered into force on 1 August 2024 after a three-year legislative process. Since then, key milestones have already taken effect: prohibited AI practices became enforceable in February 2025, along with mandatory AI literacy requirements for all staff handling AI systems.

As of August 2025, general-purpose AI providers must comply with transparency and documentation obligations. The next major deadline is August 2026, when high-risk AI system requirements become enforceable.

August 2024

The Act officially enters into force

February 2025

Prohibitions on unacceptable-risk AI practices take effect, along with AI literacy requirements for staff handling AI systems

August 2025

Obligations for GPAI providers take effect, together with governance rules, provisions on notifying authorities and notified bodies, and penalties

February 2026

The Commission's implementing act establishing the post-market monitoring plan template is due

August 2026

Obligations become enforceable for high-risk AI systems listed in Annex III, including biometrics, critical infrastructure, and law enforcement use cases

August 2027

Obligations take effect for high-risk AI systems that are safety components of products, or are themselves products, subject to third-party conformity assessment under Annex I legislation

By End of 2030

AI systems that are components of large-scale EU IT systems in the area of Freedom, Security, and Justice (Annex X) must be brought into compliance

EU AI Act: How Compliance Actually Works

The EU AI Act doesn't sort AI systems into tidy risk tiers. It runs four independent checks, and the obligations stack. A single AI system can trigger multiple gates simultaneously.

Most guides get this wrong. Here's how compliance actually works.

GATE 1 · Article 5

Prohibited Practices

Does this AI practice cross a red line?

GATE 2 · Annex III

High-Risk Systems

Is this AI used in a high-stakes domain?

GATE 3 · Article 50

Transparency

Does this AI interact with people, detect emotions, or generate synthetic media?

GATE 4 · Chapter V

General-Purpose AI

Are you providing a foundation model or GPAI?

Obligations stack: one system can trigger multiple gates

Examples

Credit Scoring Chatbot

High-risk (essential services) + Transparency (human interaction)

Customer Service Bot

Transparency only: disclose it's AI

Medical Triage LLM

All three: High-risk + Transparency + GPAI obligations
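To make the gate logic concrete, here is a minimal Python sketch of how the stacking could be modelled, assuming one simplified boolean flag per gate. The flags, class names, and article shorthand are illustrative assumptions, not the Act's legal tests, which require case-by-case analysis.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    prohibited_practice: bool     # Gate 1: Article 5 red lines
    annex_iii_domain: bool        # Gate 2: high-stakes domain (Annex III)
    interacts_or_generates: bool  # Gate 3: Article 50 transparency triggers
    is_gpai: bool                 # Gate 4: general-purpose AI (Chapter V)

def gates_triggered(system: AISystem) -> list[str]:
    """Return every gate a system trips; obligations stack rather than exclude."""
    gates = []
    if system.prohibited_practice:
        gates.append("Gate 1: prohibited - may not be placed on the EU market")
    if system.annex_iii_domain:
        gates.append("Gate 2: high-risk obligations")
    if system.interacts_or_generates:
        gates.append("Gate 3: transparency obligations")
    if system.is_gpai:
        gates.append("Gate 4: GPAI obligations")
    return gates

# The three examples above, encoded as test cases
credit_bot = AISystem("credit scoring chatbot", False, True, True, False)
service_bot = AISystem("customer service bot", False, False, True, False)
triage_llm = AISystem("medical triage LLM", False, True, True, True)
for s in (credit_bot, service_bot, triage_llm):
    print(s.name, "->", gates_triggered(s))
```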

The EU AI Act Covers More Than You Think

You thought you had 3 AI systems. You probably have 50. The Act's definition is broad, and most of your AI is hiding below the surface.

What everyone pictures
Large Language Models
Image Generators
Code Assistants
Autonomous Vehicles
Hidden below the surface
High-Risk (Gate 2)
Transparency (Gate 3)
Minimal Risk

The average enterprise has roughly 10x more AI systems than it assumes.

Most haven't been inventoried.

EU AI Act Definition (Article 3)

An AI system is "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."

What's hiding in your stack?
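A practical first step is an inventory screened against the Article 3 definition. Below is a minimal sketch, assuming one record per system; the field names and the three-element screen are our own simplification of the definition, not an official test.

```python
from dataclasses import dataclass

@dataclass
class InventoryEntry:
    """One row in an AI system inventory, screened against Article 3."""
    name: str
    vendor: str                   # in-house or third-party supplier
    autonomy: bool                # operates with some level of autonomy
    infers_outputs: bool          # infers predictions/content/recommendations/decisions
    influences_environment: bool  # outputs affect physical or virtual environments
    notes: str = ""

    def in_scope(self) -> bool:
        # Rough screen: all three definitional elements present.
        return self.autonomy and self.infers_outputs and self.influences_environment

stack = [
    InventoryEntry("resume ranker in HR suite", "third-party", True, True, True),
    InventoryEntry("static salary lookup table", "in-house", False, False, True),
]
print([entry.name for entry in stack if entry.in_scope()])
```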

The EU AI Act Follows Your AI

Like GDPR, the EU AI Act is extraterritorial. It applies based on who you affect, not where you're headquartered.


The Chain of Responsibility

A real-world example of how the EU AI Act reaches across borders

Company A
Chile
Provider

Builds AI credit scoring model

Covered by EU AI Act

System placed on EU market through value chain

Company B
United States
Deployer

Licenses model for fintech platform

Covered by EU AI Act

Deploying high-risk AI affecting EU persons

EU Customers
European Union
Affected Persons

Credit decisions made about them

Protected by the EU AI Act

Compliance Requirements

The Act lays out a range of requirements for high-risk AI systems, covering:

Risk Management System (Article 9)
Data and Data Governance (Article 10)
Technical Documentation (Article 11)
Record Keeping (Article 12)
Transparency and Provision of Information to Deployers (Article 13)
Human Oversight (Article 14)
Accuracy, Robustness and Cybersecurity (Article 15)
Quality Management System (Article 17)
Fundamental Rights Impact Assessment* (Article 27)

* Required only for deployers that are public bodies or private entities providing public services, and for deployers using high-risk AI for credit scoring or life/health insurance risk assessment.
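One way to operationalize this list is a simple control map from article to internal check. The sketch below is illustrative: the check descriptions are our paraphrases, not regulatory text, and a real compliance program needs far more granularity.

```python
# Illustrative mapping of high-risk obligations to internal checks;
# the article list mirrors the section above, the check wording is ours.
HIGH_RISK_CONTROLS = {
    "Article 9":  "Risk management system established and maintained",
    "Article 10": "Data governance and bias mitigation for training/validation/test sets",
    "Article 11": "Technical documentation up to date",
    "Article 12": "Automatic event logging enabled",
    "Article 13": "Instructions and transparency information for deployers",
    "Article 14": "Human oversight measures designed in",
    "Article 15": "Accuracy, robustness, and cybersecurity targets met",
    "Article 17": "Quality management system in place",
    "Article 27": "Fundamental rights impact assessment (where required)",
}

def open_items(status: dict[str, bool]) -> list[str]:
    """List controls not yet marked as satisfied."""
    return [f"{article}: {check}" for article, check in HIGH_RISK_CONTROLS.items()
            if not status.get(article, False)]

print(open_items({"Article 9": True, "Article 11": True}))
```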

How Modulos Helps You Meet Every Requirement

The Modulos AI Governance Platform addresses each EU AI Act obligation with purpose-built tools.

Risk Management: Quantitative risk assessment with Monte Carlo simulation (a generic sketch follows below)
Documentation & Records: AI agents auto-generate and find evidence in your repos
Human Oversight & QMS: Built-in review workflows with a full audit trail
Multi-Framework Compliance: 140+ controls mapped to the EU AI Act, ISO 42001, and NIST
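For readers curious what quantitative, Monte Carlo-style risk assessment looks like in principle, here is a generic sketch. It is not Modulos' actual methodology; the incident probability and loss distribution are invented parameters for illustration only.

```python
import random

def simulate_annual_loss(n_trials: int = 100_000) -> list[float]:
    """Generic Monte Carlo sketch: sample incident counts and per-incident impact."""
    losses = []
    for _ in range(n_trials):
        # Assumed ~5% chance of an incident in any given month
        incidents = sum(random.random() < 0.05 for _ in range(12))
        # Assumed log-normal per-incident cost (parameters are illustrative)
        losses.append(sum(random.lognormvariate(10, 1.5) for _ in range(incidents)))
    return losses

losses = sorted(simulate_annual_loss())
print(f"median annual loss: {losses[len(losses) // 2]:,.0f}")
print(f"95th percentile:    {losses[int(len(losses) * 0.95)]:,.0f}")
```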

Conformity Assessments

High-risk AI systems must undergo Conformity Assessments to demonstrate compliance before market entry. This structured process ensures your AI systems meet regulatory requirements.

Step 1 - A high-risk AI system is developed

Establish, implement, document, and maintain a risk management system to address the risks posed by a high-risk AI system.

Step 2 - The system undergoes the conformity assessment and complies with AI requirements

- Implement effective data governance, including bias mitigation, training, validation, and testing of data sets.

- Maintain up-to-date technical documentation in a clear and comprehensive manner.

If the AI system undergoes a substantial modification during its lifecycle, repeat from Step 2.

Step 3 - Registration of stand-alone systems in an EU database.

- Ensure that high-risk AI systems allow for the automatic recording of events (logs) over their lifetime.

- Design systems to ensure sufficient transparency for deployers to interpret outputs and use appropriately.

Step 4 - A declaration of conformity is signed, and the AI system should bear the CE marking

- Develop systems to maintain an appropriate level of accuracy, robustness, and cybersecurity throughout their lifecycle.

- Ensure proper human oversight during the period the system is in use.

CE Mark

The system can be placed on the market.

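As an illustration of the Article 12 logging requirement referenced in Step 3, here is a minimal sketch of automatic, append-only event recording. The schema and field names are assumptions: the Act mandates logging but does not prescribe a format.

```python
import json
import time
import uuid

def log_event(decision: str, model_version: str, inputs_digest: str) -> dict:
    """Append one structured, timestamped record per automated decision.
    Field names are illustrative; Article 12 does not prescribe a schema."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "model_version": model_version,
        "inputs_digest": inputs_digest,  # hash of inputs, not raw personal data
        "decision": decision,
    }
    with open("ai_event_log.jsonl", "a") as f:  # append-only event log
        f.write(json.dumps(record) + "\n")
    return record

log_event(decision="credit_declined", model_version="v2.3.1",
          inputs_digest="sha256:ab12cd34")
```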

Disclaimer:

The steps outlined above are intended to provide a general overview of the conformity assessment process. They should not be considered exhaustive and are not intended as legal or technical advice.

Understanding Roles and Responsibilities

The EU AI Act outlines specific roles and responsibilities for stakeholders in the AI system lifecycle:

Providers

Role: Develop AI systems and place them on the market.
Responsibilities: Maintain technical documentation, ensure compliance with the Act, and provide transparency information.

Deployers

Role: Use AI systems within their operations.
Responsibilities: Use systems in accordance with the provider's instructions, assign human oversight, monitor operation and report serious incidents, and conduct fundamental rights impact assessments where required.

Importers

Role: Place AI systems from third countries on the EU market.
Responsibilities: Verify compliance, provide necessary documentation, and cooperate with authorities.

Distributors

Role: Make AI systems available on the EU market.
Responsibilities: Verify CE marking and conformity, take corrective actions if needed, and cooperate with authorities.

Modifying AI Systems

Significant modifications, such as altering core algorithms or retraining with new data, may reclassify you as a provider, necessitating adherence to provider obligations.

Penalties for Non-Compliance

The EU AI Act imposes significant fines for non-compliance, calculated as a percentage of the offending company's global annual turnover or a predetermined amount, whichever is higher. Provisions include more proportionate caps on administrative fines for SMEs and start-ups.

Ensure your AI systems comply with the EU AI Act to avoid these penalties.

Request a Demo

Penalty Breakdown

Non-compliance with prohibitions

Up to
€35M
or 7% of turnover

Supplying incorrect, incomplete, or misleading information

Up to
€7.5M
or 1% of turnover

Non-compliance with other obligations

Up to
€15M
or 3% of turnover
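The fine calculation itself is simple arithmetic: the applicable cap is the higher of the fixed amount and the turnover percentage (for SMEs and start-ups, Article 99 applies the lower of the two). A quick sketch:

```python
def penalty_cap(fixed_eur: float, pct: float, turnover_eur: float,
                is_sme: bool = False) -> float:
    """Cap on an EU AI Act fine: higher of the fixed amount or the
    share of worldwide annual turnover; lower of the two for SMEs."""
    return min(fixed_eur, pct * turnover_eur) if is_sme \
        else max(fixed_eur, pct * turnover_eur)

# Example: prohibited-practice violation, EUR 2bn turnover
print(penalty_cap(35_000_000, 0.07, 2_000_000_000))  # -> 140,000,000.0
```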

Download the EU AI Act Guide

Learn how to ensure your AI systems comply with the EU AI Act. This guide provides a clear overview of the regulation, mandatory compliance requirements, and how to prepare your AI operations for these changes.

Download the Guide
Modulos EU AI Act Guide: Foundations and Practical Insights

FAQ about the EU AI Act

What is the EU AI Act?

The EU AI Act is the European Union's flagship law to regulate how AI systems should be designed and deployed. It aims to protect fundamental rights, ensure safety, and foster innovation while creating a harmonized legal framework across the EU.

Who needs to comply with the EU AI Act?

The EU AI Act mandates that AI system providers based in the EU comply with the regulation. It also applies to providers and deployers outside the EU whose AI systems are used on the EU market. This means organizations worldwide may need to comply if their AI products or services reach EU users.

Does the EU AI Act apply outside the EU?

Yes; the situation is similar to the global reach of the General Data Protection Regulation (GDPR). The AI Act applies to providers outside the EU when their AI systems' output is used in the EU, and non-EU deployers using AI systems in the EU are also covered. This extraterritorial scope means companies worldwide must assess their AI offerings for EU compliance.

When does the EU AI Act take effect?

On 1 August 2024, the EU AI Act officially entered into force. The Act becomes fully applicable by August 2027 (with an extended deadline to the end of 2030 for certain large-scale EU IT systems), with different provisions taking effect at earlier milestones: prohibitions on unacceptable risk (February 2025), GPAI obligations (August 2025), and high-risk system requirements (August 2026-2027).

How can companies prepare for the EU AI Act?

To be ready for the EU AI Act, companies will have to adhere to the extensive requirements stipulated in the regulation. Key steps include conducting an AI systems inventory, classifying systems by risk level, implementing the required documentation and risk management systems, ensuring sound data governance practices, and establishing human oversight mechanisms.

When do modifications to an AI system make me a provider?

According to the EU AI Act, significant modifications to an AI system can change your role from a deployer to a provider, triggering additional compliance obligations. Key modifications that may reclassify you include:

- Altering core algorithms: changes to the fundamental logic or algorithms of the AI system.

- Re-training with new data: using new datasets for training that substantially alter the system's performance or behavior.

- Integration with other systems: modifying how the AI system interacts with other hardware or software components.

Becoming a provider means complying with all provider obligations under the Act, including conformity assessments, documentation requirements, and ongoing monitoring.

Ensure Your AI Compliance

Whether you are already using or considering AI in your business, keeping these upcoming regulatory changes in mind is essential. Modulos can support your compliance journey.