Get Ready for the EU AI Act
The EU Artificial Intelligence Act is setting a global standard for AI regulation, much as the GDPR did for data privacy. Here, we provide an overview of the Act and explain how to prepare your AI systems for compliance.
How is an AI System Defined?
According to the EU AI Act, an 'Artificial Intelligence system' is defined as a machine-based system designed to operate with varying levels of autonomy. It may exhibit adaptiveness after deployment and, for explicit or implicit objectives, infers from the input it receives to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
The following table outlines the core components and characteristics of this definition:
| Category | What This Covers | Examples |
|---|---|---|
| Machine-Based System | Relies on hardware and software to function (data processing, model training, decision-making, etc.) | Traditional servers running trained models; quantum computers running AI algorithms; cloud-based AI services processing requests via APIs |
| Varying Levels of Autonomy | Operates with some degree of independence from direct human control, ranging from semi-automated to fully autonomous. | Chatbots that respond to user queries but let humans override them; autonomous drones or robots that operate independently within defined parameters |
| Adaptiveness (Optional) | May evolve or learn post-deployment (self-learning, model updates). Not strictly required for a system to qualify as AI under the Act. | Recommendation algorithms that refine suggestions with each user interaction; machine learning models continuously trained on new data |
| Objectives (Explicit/Implicit) | Systems can be programmed with clear goals (explicit) or develop them from patterns in data (implicit). | A language model aiming to minimize prediction errors vs. a chatbot intended for legal consulting; a fraud detection system programmed with specific rules vs. one that identifies anomalies autonomously |
| Infers How to Generate Outputs | Core feature distinguishing AI from simpler software: uses machine learning or logic-based inference to produce outputs. | Supervised learning (spam detection); unsupervised learning (anomaly detection); reinforcement learning (game-playing agents) |
| Generates Outputs with Real Impact | Produces predictions, recommendations, content, or decisions that can shape physical or virtual environments. | Predictive maintenance in factories; generative text/image models in digital marketing; automated hiring systems making employment recommendations |
| Interaction with Environments | AI systems aren't passive; they actively change or affect the context in which they're deployed, whether physical systems or digital platforms. | Self-driving cars adjusting speed in traffic; an AI content-filtering system that moderates an online community |
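To make the "infers how to generate outputs" criterion concrete, here is a short Python sketch contrasting a hard-coded rule (plain software) with a system that infers its decision boundary from labelled examples. The spam-filter scenario, the data, and the threshold-fitting logic are invented for illustration; they are not taken from the Act.

```python
# Hypothetical sketch: a fixed rule vs. a system that infers from data.

def rule_based_filter(message_length: int) -> bool:
    """Plain software: the decision logic is fully hard-coded by its author."""
    return message_length > 500  # fixed rule; nothing is inferred

def fit_threshold(samples: list[tuple[int, bool]]) -> int:
    """AI-like behavior: infer a decision threshold from labelled examples."""
    def errors(t: int) -> int:
        # Misclassifications if we predict "spam" for lengths above t.
        return sum((length > t) != is_spam for length, is_spam in samples)
    return min((length for length, _ in samples), key=errors)

# Labelled training examples: (message length, is_spam)
training = [(120, False), (300, False), (800, True), (950, True)]
threshold = fit_threshold(training)  # the boundary is inferred, not hard-coded
print(rule_based_filter(650))        # True: the fixed rule fires
print(700 > threshold)               # True: prediction from the inferred threshold
```

The step that matters for the definition is inference: the second system derives its behavior from the input it receives rather than having it fully specified in advance.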
Risk-Based Classification
The EU AI Act introduces a risk-based classification for AI applications, categorizing them from minimal to unacceptable risk. High-risk AI systems require rigorous compliance, including risk management and data governance.
These risk categories can overlap: a single AI use case may fall into several of them simultaneously.
- **Unacceptable risk:** AI applications such as social scoring systems and manipulative technologies are banned because of their potential for significant harm.
- **High risk:** Applications such as creditworthiness evaluation or critical infrastructure management require comprehensive compliance measures, including risk management, data governance, and regular auditing.
- **Limited risk:** Systems such as image and video processing tools, recommender systems, and chatbots carry transparency and disclosure obligations.
- **Minimal risk:** Applications such as spam filters and video games are not subject to specific regulatory requirements under the Act.
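For teams building an internal AI inventory, the hypothetical Python sketch below shows one way to encode these four tiers. The use-case mapping and the conservative default are assumptions made for illustration; classifying a real system requires legal analysis against the Act's Article 5 prohibitions and Annex III list.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers introduced by the EU AI Act."""
    UNACCEPTABLE = "prohibited"           # e.g. social scoring
    HIGH = "strict compliance required"   # e.g. creditworthiness evaluation
    LIMITED = "transparency obligations"  # e.g. chatbots
    MINIMAL = "no specific obligations"   # e.g. spam filters

# Hypothetical mapping from internal use-case names to tiers; each real
# system needs an individual legal assessment.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "credit_scoring": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Default unknown systems to HIGH until they have been reviewed."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

print(classify("customer_chatbot"))  # RiskTier.LIMITED
```

Defaulting unknown systems to the high-risk tier is a deliberately cautious design choice: it forces a review before any obligation is waived.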
Compliance Requirements
The Act lays out a range of requirements for high-risk AI systems, covering risk management, data governance, technical documentation, record-keeping, transparency, human oversight, and accuracy, robustness, and cybersecurity.
Limited-risk systems are evaluated under the same categories but face a lower level of scrutiny.
Aligning with industry standards like ISO/IEC 42001:2023 – AI Management System – can help organizations demonstrate conformity with the EU AI Act requirements. Learn more about ISO/IEC 42001 →
Conformity Assessments
High-risk AI systems must undergo Conformity Assessments to demonstrate compliance before market entry. This structured process ensures your AI systems meet regulatory requirements.
Step 1 - A high-risk AI system is developed
Establish, implement, document, and maintain a risk management system to address the risks posed by the AI system throughout its lifecycle.
Step 2 - The system undergoes the conformity assessment and complies with AI requirements
- Implement effective data governance, including bias mitigation and the management of training, validation, and testing data sets.
- Prepare technical documentation that describes the AI system, its development process, and how it meets regulatory requirements.
Step 3 - Stand-alone systems are registered in an EU database
- Ensure that high-risk AI systems allow for the automatic recording of events (logs) over their lifetime; a minimal logging sketch follows these steps.
- Enable human oversight by designing systems that allow human intervention.
Step 4 - A declaration of conformity is signed, and the AI system bears the CE marking
- Develop systems that maintain an appropriate level of accuracy, robustness, and cybersecurity throughout the entire lifecycle.
- Establish a quality management system ensuring ongoing compliance.
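Step 3 depends on the record-keeping requirement: high-risk systems must automatically log events over their lifetime. Below is a minimal sketch of structured event logging, assuming an invented event schema and a hypothetical `log_decision` helper; the Act does not prescribe a specific log format.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger("ai_system_audit")

def log_decision(system_id: str, input_ref: str, output: str, operator: str) -> None:
    """Record one AI decision as a structured, timestamped audit event."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "input_ref": input_ref,      # reference to the input, not the raw data
        "output": output,
        "human_operator": operator,  # supports human-oversight traceability
    }
    logger.info(json.dumps(event))

log_decision("credit-scoring-v2", "application#4812", "refer_to_human_review", "j.doe")
```

Emitting one structured event per decision keeps the log machine-readable, which simplifies the auditing and post-market monitoring duties mentioned above.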
Disclaimer:
The steps outlined above are intended to provide a general overview of the conformity assessment process for high-risk AI systems under the EU AI Act. This is not an exhaustive representation of all legal requirements. Organizations should consult the official regulatory text and seek legal advice to ensure full compliance.
Understanding Roles and Responsibilities
The EU AI Act outlines specific roles and responsibilities for stakeholders in the AI system lifecycle:
| Role | What They Do | Key Responsibilities |
|---|---|---|
| Providers | Develop and market AI systems. | Maintain technical documentation, ensure compliance with the Act, and provide transparency information. |
| Deployers | Use AI systems within their operations. | Conduct impact assessments, notify authorities, and involve stakeholders in the assessment process. |
| Importers | Place AI systems from third countries on the EU market. | Verify compliance, provide necessary documentation, and cooperate with authorities. |
| Distributors | Make AI systems available on the market. | Verify CE marking and conformity, take corrective actions if needed, and cooperate with authorities. |
Modifying AI Systems
Significant modifications, such as altering core algorithms or retraining with new data, may reclassify you as a provider, necessitating adherence to provider obligations.
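As a rough illustration of how a deployer might stay ahead of this rule, the sketch below tracks changes to a system in use and flags potentially significant ones for legal review. The record structure, the field names, and the `core_change` flag are all hypothetical; whether a modification actually reclassifies you is a legal determination, not a code check.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """Hypothetical change log for an AI system operated by a deployer."""
    name: str
    provider: str
    model_version: str
    modifications: list[str] = field(default_factory=list)

    def record_change(self, description: str, core_change: bool) -> None:
        # 'core_change' is a manual flag that routes the change to
        # compliance review; it is not itself a legal test.
        self.modifications.append(description)
        if core_change:
            print(f"Flagged for review: '{description}' on '{self.name}' "
                  "may trigger provider obligations.")

system = AISystemRecord("support-chatbot", "VendorX", "1.4.2")
system.record_change("Updated UI copy", core_change=False)
system.record_change("Re-trained on in-house ticket data", core_change=True)
```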
Download the EU AI Act Guide
Learn how to ensure your AI systems comply with the EU AI Act. This guide provides a clear overview of the regulation, mandatory compliance requirements, and how to prepare your AI operations for these changes.
Timeline and Compliance Milestones
In April 2021, the EU Commission released the full proposed EU AI Act, initiating the legislative process. After deliberations among the European Commission, Parliament, and Council, the final text was approved in March 2024.
The Act officially entered into force on 1 August 2024. By 2 February 2025, all providers and deployers of AI systems must ensure AI literacy for their staff handling these technologies.
| Date | Milestone |
|---|---|
| August 2024 | The Act officially enters into force. |
| 6 months after (February 2025) | Prohibitions on unacceptable-risk AI take effect, along with AI literacy requirements. |
| 12 months after (August 2025) | Obligations for GPAI providers, as well as rules on notifications to authorities and fines, take effect. |
| 18 months after (February 2026) | The Commission issues its implementing act on post-market monitoring. |
| 24 months after (August 2026) | Obligations apply for high-risk AI systems in areas such as biometrics, critical infrastructure, and law enforcement. |
| 36 months after (August 2027) | Obligations apply for high-risk AI systems that are safety components of products requiring third-party conformity assessment. |
| By end of 2030 | Compliance deadline for AI systems embedded in large-scale IT systems established under EU law in the area of Freedom, Security, and Justice. |
Penalties for Non-Compliance
The EU AI Act imposes significant fines for non-compliance, calculated as a percentage of the offending company's global annual turnover or a fixed amount, whichever is higher.
Ensure your AI systems comply with the EU AI Act to avoid these penalties.
Penalty Breakdown

| Violation | Maximum Fine |
|---|---|
| Non-compliance with prohibited AI practices | Up to €35 million or 7% of global annual turnover, whichever is higher |
| Non-compliance with other obligations under the Act | Up to €15 million or 3% of global annual turnover, whichever is higher |
| Supplying incorrect, incomplete, or misleading information to authorities | Up to €7.5 million or 1% of global annual turnover, whichever is higher |
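The "whichever is higher" rule is simple enough to state in code. The sketch below applies the maximum percentages and fixed amounts from the table above; actual fines are set case by case by the authorities and may fall well below these caps.

```python
# Statutory maximum: the higher of a turnover percentage and a fixed amount.
def max_fine(turnover_eur: float, pct: float, fixed_eur: float) -> float:
    """Return the cap: the higher of pct-of-turnover and the fixed amount."""
    return max(turnover_eur * pct, fixed_eur)

# A firm with EUR 2 billion global turnover violating a prohibition:
# 7% of turnover (EUR 140 million) exceeds the EUR 35 million fixed amount.
print(max_fine(2_000_000_000, 0.07, 35_000_000))  # 140000000.0
```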
FAQ About the EU AI Act

What is the EU AI Act?
The EU AI Act is the European Union's flagship law regulating how AI systems are designed and deployed. It aims to protect fundamental rights, ensure safety, and foster innovation while creating a harmonized legal framework across the EU.

Who needs to comply with the EU AI Act?
The EU AI Act mandates that AI system providers based in the EU comply with the regulation. The Act also applies to providers and deployers outside the EU whose AI systems are used in the EU market, so organizations worldwide may need to comply if their AI products or services reach EU users.

Does the EU AI Act apply outside the EU?
The situation is similar to the global reach of the General Data Protection Regulation (GDPR). The AI Act applies to providers outside the EU when their AI system's output is used in the EU, and non-EU deployers using AI systems in the EU are also covered. This extraterritorial scope means companies worldwide must assess their AI offerings for EU compliance.

When does the EU AI Act take effect?
On 1 August 2024, the EU AI Act officially entered into force. The Act becomes fully applicable by August 2027, with different provisions taking effect at various milestones: prohibitions on unacceptable risk (February 2025), GPAI obligations (August 2025), and high-risk system requirements (August 2026 to August 2027).

How can companies prepare for the EU AI Act?
To be ready for the EU AI Act, companies will have to adhere to the extensive requirements stipulated in the regulation. Key steps include conducting an AI systems inventory, classifying systems by risk level, implementing required documentation and risk management systems, ensuring sound data governance practices, and establishing human oversight mechanisms.

What modifications to an AI system can reclassify me as a provider?
According to the EU AI Act, significant modifications to an AI system can change your role from a deployer to a provider, triggering additional compliance obligations. Key modifications that may reclassify you include:
- Altering core algorithms: changes to the fundamental logic or algorithms of the AI system.
- Re-training with new data: using new datasets for training that substantially alter the system's performance or behavior.
- Integration with other systems: modifying how the AI system interacts with other hardware or software components.
Becoming a provider brings increased responsibilities: you must comply with all provider obligations under the Act, including conformity assessments, documentation requirements, and ongoing monitoring.
Ensure Your AI Compliance
Whether you are already using or considering AI in your business, keeping these upcoming regulatory changes in mind is essential. Modulos can support your compliance journey.