EU AI Act Risk Categories Explained: Why the Four-Tier Pyramid Is Wrong

The EU AI Act Risk Category Problem
Search "EU AI Act risk categories" and you'll find the same pyramid everywhere: unacceptable, high, limited, minimal. Four tiers. Neat hierarchy. One system, one category.
That model is wrong, and not just oversimplified. The pyramid is structurally misleading in ways that cause real compliance failures.
The EU AI Act does not sort AI systems into mutually exclusive risk tiers. Instead, it runs four independent compliance checks, and the obligations stack. A single AI system can trigger multiple checks simultaneously, which means understanding this distinction is the difference between actual compliance and checking the wrong boxes.
The Pyramid Problem
The "four-tier risk pyramid" appears in Commission communications, consulting decks, and nearly every explainer article. But look at the actual legislative text: the term "limited risk" does not appear as a risk classification category. Article 50's transparency requirements function as a parallel track that applies across risk levels rather than constituting a separate tier.
A credit-scoring chatbot is both high-risk (essential services under Annex III) and subject to transparency obligations (human interaction under Article 50). The obligations stack. The pyramid model would have you pick one.
This matters because compliance planning based on the pyramid will miss obligations. If you think transparency requirements only apply to "limited risk" systems, you'll overlook the disclosure requirements that also apply to your high-risk systems.
How Compliance Actually Works: Four Gates
Instead of one tiered classification, the Act runs four independent checks. Think of them as gates rather than tiers: each gate asks its own question, and each triggered gate adds its own obligations.
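To make the structural point concrete before walking through each gate, here is a minimal sketch in Python. The class, field names, and obligation labels are invented for illustration, not legal terms, and each boolean stands in for what would really be a documented legal assessment. The point it encodes: classification returns a set of obligation tracks, not a single tier.

```python
from dataclasses import dataclass


@dataclass
class AISystem:
    # Simplified yes/no answers to each gate's question; in practice each
    # answer is the outcome of a documented legal assessment.
    name: str
    prohibited_practice: bool = False         # Gate 1 (Article 5)
    high_risk_use: bool = False               # Gate 2 (Article 6 + Annexes I/III)
    interacts_with_humans: bool = False       # Gate 3 (Article 50)
    generates_synthetic_content: bool = False
    provides_gpai_model: bool = False         # Gate 4 (Chapter V)


def classify(system: AISystem) -> set[str]:
    """Return the union of triggered obligation tracks, not a single label."""
    if system.prohibited_practice:
        return {"prohibited: no compliance pathway"}
    obligations: set[str] = set()
    if system.high_risk_use:
        obligations.add("high-risk regime")
    if system.interacts_with_humans or system.generates_synthetic_content:
        obligations.add("Article 50 transparency disclosures")
    if system.provides_gpai_model:
        obligations.add("GPAI model-level obligations")
    return obligations


# A credit-scoring chatbot triggers Gate 2 AND Gate 3; a pyramid would force one label.
bot = AISystem("credit-scoring chatbot", high_risk_use=True, interacts_with_humans=True)
print(classify(bot))  # {'high-risk regime', 'Article 50 transparency disclosures'}
```

The rest of this section spells out what each of those four questions actually asks.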
Gate 1: Prohibited Practices (Article 5)
Question: Does this AI practice cross a fundamental rights red line?
Consequence: Banned. Full stop.
Eight categories of AI practices are prohibited entirely:
- Social scoring leading to detrimental or disproportionate treatment (the ban is not limited to public authorities)
- Real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (with narrowly defined exceptions)
- Emotion recognition in workplaces and educational institutions
- Biometric categorization inferring sensitive characteristics (race, political opinions, sexual orientation, religious beliefs)
- Untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases
- AI exploiting vulnerabilities of specific groups (age, disability, social/economic situation)
- AI designed to manipulate behavior causing significant harm
- AI assessing the risk of a person committing a criminal offence based solely on profiling or personality traits
If your system falls here, no compliance pathway exists. Redesign or discontinue.
Gate 2: High-Risk Systems (Article 6 + Annexes I and III)
Question: Is this AI used in a high-stakes domain or as a safety component?
Consequence: Full compliance regime requiring conformity assessment, technical documentation, risk management, human oversight, EU database registration, and post-market monitoring.
Two pathways trigger high-risk classification:
Pathway A (Annex I): AI serving as a safety component of products covered by EU harmonization legislation requiring third-party conformity assessment. Medical devices, machinery, toys, lifts, radio equipment, vehicles, aircraft. These integrate with existing sectoral product safety frameworks.
Pathway B (Annex III): Standalone AI systems in eight high-stakes domains:
- Biometrics (remote identification, categorization, emotion recognition)
- Critical infrastructure (road traffic, utilities, digital infrastructure)
- Education (admissions, assessment, proctoring)
- Employment (recruitment, performance evaluation, task allocation)
- Essential services (credit scoring, insurance risk assessment, emergency dispatch)
- Law enforcement (evidence evaluation, recidivism prediction, profiling)
- Migration and border control (risk assessment, application examination)
- Administration of justice and democratic processes (judicial decision support, election and voter influence)
Annex III systems can claim exemption under Article 6(3) if they do not materially influence decision outcomes, demonstrated by one of four conditions: performing a narrow procedural task, improving prior human work, detecting patterns without replacing human assessment, or performing only preparatory tasks. But any system performing profiling is always high-risk, regardless of exemptions.
Gate 3: Transparency Requirements (Article 50)
Question: Does this AI interact with people, detect emotions, or generate synthetic content?
Consequence: Disclosure and labeling obligations.
Transparency requirements under Article 50 operate as a parallel track rather than a risk tier, which means they apply regardless of whether Gate 2 triggered:
- AI systems interacting directly with humans must disclose that they are AI (unless obvious from context)
- Emotion recognition and biometric categorization systems must inform subjects
- Synthetic audio, image, video, or text must be marked as AI-generated in a machine-readable format
- Deepfakes must be disclosed (with exceptions for creative/satirical work)
A high-risk HR screening system that uses a chatbot interface triggers both Gate 2 (high-risk) and Gate 3 (transparency). A simple customer service chatbot might only trigger Gate 3. The gates are independent.
Gate 4: General-Purpose AI (Chapter V)
Question: Are you providing a foundation model or general-purpose AI system?
Consequence: Model-level obligations including documentation and copyright compliance, with systemic risk models facing additional requirements for evaluation, incident reporting, and cybersecurity.
GPAI obligations attach to the model provider rather than the downstream deployer. If you deploy GPT-4 in a high-risk application, OpenAI has GPAI obligations and you have high-risk deployer obligations, with the two tracks running in parallel.
GPAI models with systemic risk (currently presumed where cumulative training compute exceeds 10²⁵ FLOPs) face additional requirements: adversarial testing, serious incident tracking and reporting, and adequate cybersecurity.
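As a rough illustration of how that presumption threshold works, the sketch below applies the common 6 × parameters × training tokens approximation for dense transformer training compute. The heuristic and the placeholder figures are assumptions for the example, not anything specified in the Act.

```python
# Back-of-the-envelope check against the 10^25 FLOP presumption threshold.
# The 6 * parameters * tokens estimate is a common approximation for dense
# transformer training compute, not a method defined in the AI Act, and the
# figures below are placeholders rather than any real model.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25


def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    return 6.0 * n_parameters * n_training_tokens


flops = estimated_training_flops(n_parameters=70e9, n_training_tokens=15e12)
print(f"~{flops:.1e} FLOPs -> systemic risk presumed: {flops > SYSTEMIC_RISK_THRESHOLD_FLOPS}")
# ~6.3e+24 FLOPs -> systemic risk presumed: False
```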
How Gates Stack: Real Examples
This is where the pyramid model breaks down completely.
Credit Scoring Chatbot
- Gate 1: Not prohibited ✓
- Gate 2: High-risk (creditworthiness assessment under Annex III) ✓
- Gate 3: Transparency required (human interaction) ✓
- Gate 4: Depends on underlying model
Obligations from Gates 2 AND 3 both apply. The pyramid would force you to pick "high risk" or "limited risk" while missing that transparency obligations also attach.
Customer Service Bot
- Gate 1: Not prohibited ✓
- Gate 2: Not high-risk ✓
- Gate 3: Transparency required (human interaction) ✓
- Gate 4: Depends on underlying model
Only Gate 3 triggers. The pyramid misleadingly calls this "limited risk," but transparency obligations are a disclosure requirement rather than a risk classification.
Medical Triage LLM
- Gate 1: Not prohibited ✓
- Gate 2: High-risk (emergency healthcare patient triage under Annex III) ✓
- Gate 3: Transparency required (human interaction, possibly synthetic content) ✓
- Gate 4: GPAI obligations apply to model provider ✓
Three gates trigger simultaneously, with the deployer holding high-risk obligations and transparency obligations while the model provider holds GPAI obligations. The pyramid cannot represent this structure.
Spam Filter
- Gate 1: Not prohibited ✓
- Gate 2: Not high-risk ✓
- Gate 3: No direct human interaction requiring disclosure ✓
- Gate 4: Not GPAI ✓
No gates trigger. This qualifies as genuinely minimal-risk, though the classification follows from not triggering any of the independent compliance checks rather than from placement in a designated tier.
Annex III High-Risk Domains: The Deep Dive
Gate 2 deserves detailed treatment because most enterprise compliance work concentrates here. Annex III defines eight domains where AI systems are presumptively high-risk. Understanding the actual regulatory language and the recitals' reasoning helps distinguish genuinely covered systems from those that merely seem adjacent.
1. Biometrics (Recitals 54, 159)
Three sub-categories trigger high-risk classification:
(a) Remote biometric identification systems—meaning 1:n matching against databases of enrolled individuals. The Act explicitly excludes verification-only systems (1:1 matching to confirm a claimed identity) from this category.
(b) Biometric categorisation that infers sensitive or protected attributes. The key word is "infer"—systems that deduce race, political opinions, trade union membership, religious beliefs, sex life, or sexual orientation from biometric data fall here.
(c) Emotion recognition systems in workplaces and educational institutions.
Practical distinction: A facial recognition system confirming you are who your badge says you are (verification) is not high-risk under this domain. A system scanning a crowd to identify individuals against a watchlist (identification) is.
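The sketch below captures that distinction in code. The embedding vectors, cosine similarity metric, and 0.8 threshold are all invented for illustration and bear no relation to a production biometric pipeline; what matters is the 1:1 versus 1:n structure.

```python
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


def verify(probe: list[float], claimed_template: list[float], threshold: float = 0.8) -> bool:
    """1:1 verification: confirm a claimed identity against one enrolled template.
    Verification-only systems are excluded from the remote identification category."""
    return cosine_similarity(probe, claimed_template) >= threshold


def identify(probe: list[float], watchlist: dict[str, list[float]], threshold: float = 0.8) -> str | None:
    """1:n identification: search a gallery of enrolled individuals for the best match.
    Remote identification of this kind is high-risk under the biometrics domain."""
    best_id, best_score = None, threshold
    for person_id, template in watchlist.items():
        score = cosine_similarity(probe, template)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id
```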
2. Critical Infrastructure (Recital 55)
This domain covers AI systems used as safety components in managing critical digital infrastructure, road traffic, or the supply of water, gas, heating, or electricity. Recital 55 explains the rationale: failure or malfunctioning in these contexts may risk life or health at large scale, or cause major disruption to social and economic activities.
What counts as a "safety component": Systems that protect physical integrity of infrastructure or users, even if not strictly necessary for the infrastructure to function. The recital gives concrete examples—AI monitoring water pressure in distribution systems, or AI controlling fire alarms in cloud computing data centres.
What doesn't: Operational optimization systems that improve efficiency but don't serve a safety-critical function.
3. Education and Vocational Training (Recital 56)
Four sub-categories:
(a) Systems determining access to, admission to, or assignment to educational institutions at all levels
(b) Systems evaluating learning outcomes, including those steering the learning process itself
(c) Systems assessing the appropriate level of education an individual will receive or be able to access
(d) Systems monitoring or detecting prohibited student behaviour during examinations
Recital 56 makes the stakes explicit: these systems determine educational and professional course of life, affecting ability to secure a livelihood. The recital specifically warns about perpetuating historical discrimination patterns affecting women, certain age groups, persons with disabilities, and persons of particular racial or ethnic origins or sexual orientation.
The coverage is broad: Adaptive learning platforms that steer curriculum paths, automated essay grading that affects progression, AI proctoring that flags "suspicious" behaviour—all fall within scope if they materially influence outcomes.
4. Employment, Workers Management, and Access to Self-Employment (Recital 57)
Two sub-categories with extensive reach:
(a) Recruitment and selection: targeted job advertisements, CV and application filtering, and candidate evaluation in interviews or tests
(b) Workplace decisions: systems affecting terms of work-related relationships, promotion and termination decisions, task allocation based on individual behaviour or personal traits, and performance or behaviour monitoring
Recital 57 echoes the discrimination concerns from education—historical patterns disadvantaging women, certain age groups, persons with disabilities, or persons of particular racial or ethnic origins or sexual orientation. It adds a distinct concern: undermining fundamental rights to data protection and privacy through workplace surveillance.
Coverage includes: Automated résumé screening, AI interview analysis, productivity monitoring software that influences performance reviews, algorithmic task assignment in gig work platforms.
5. Access to Essential Private Services and Public Services and Benefits (Recital 58)
Four sub-categories covering situations where individuals are often in vulnerable positions:
(a) Eligibility evaluation for public assistance benefits and services, including healthcare services—and systems used to grant, reduce, revoke, or reclaim such benefits
(b) Creditworthiness evaluation and credit scoring, with an explicit carve-out for fraud detection
(c) Risk assessment and pricing for life and health insurance
(d) Evaluation and classification of emergency calls, including dispatch and priority-setting for emergency first response services (police, firefighters, medical aid) and emergency healthcare patient triage
Recital 58 explains: these systems can directly impact individuals' livelihood and may infringe rights to social protection, non-discrimination, human dignity, and effective remedy. For essential services, they determine access to housing, electricity, telecommunications, and other necessities. For emergency services, they are genuinely critical for life, health, and property.
The credit scoring carve-out matters: Fraud detection systems are not high-risk under this domain, but systems evaluating whether to extend credit are.
6. Law Enforcement (Recital 59)
This domain covers AI systems used by or on behalf of law enforcement authorities, with five sub-categories:
(a) Assessment of the risk of a natural person becoming the victim of criminal offences
(b) Deception detection—polygraph-adjacent systems and similar "lie detector" technologies
(c) Evaluation of reliability of evidence in criminal investigations or prosecutions
(d) Assessment of the risk of a natural person offending or reoffending, not solely on the basis of profiling, or assessment of personality traits or past criminal behaviour
(e) Profiling during detection, investigation, or prosecution of criminal offences
The concerns here focus on accuracy, non-discrimination, and due process rights given law enforcement's coercive power.
7. Migration, Asylum, and Border Control Management (Recital 60)
Four sub-categories for systems used by competent public authorities or on their behalf:
(a) Deception detection in the migration context—polygraph-adjacent systems used during visa applications, asylum interviews, or border examinations
(b) Assessment of risks posed by a person who intends to enter or has entered the territory, including security, health, or irregular migration risks
(c) Examination of applications for asylum, visa, or residence permits—and associated complaints
(d) Identification of natural persons in migration contexts, with an explicit carve-out for travel document verification
Recital 60 notes that persons in migration situations are in particularly vulnerable positions, and these systems may affect their fundamental rights regarding asylum, free movement, and non-refoulement.
8. Administration of Justice and Democratic Processes (Recitals 61-62)
Two sub-categories with distinct rationales:
(a) Systems intended to assist judicial authorities in researching and interpreting facts and law and applying the law to concrete facts—what might be called "AI legal research on steroids" if it influences judicial reasoning
(b) Systems intended to be used to influence the outcome of an election or referendum, or the voting behaviour of natural persons exercising their vote
Important exclusions for category (b): Systems whose exposure to natural persons is only indirect—campaign logistics tools, accessibility features, and similar support functions that don't directly engage voters with persuasive content.
Recital 62 specifically calls out the threat to democratic processes and fundamental rights of free expression, assembly, and non-discrimination when AI systems directly target voter behaviour.
The Exemption Mechanism (Article 6(3))
Annex III systems can escape high-risk classification if they do not materially influence decision outcomes. Four conditions can establish this (any one suffices):
Narrow procedural task: Data conversion, document classification, and duplicate detection are examples of routine functions with minimal decision impact.
Improving prior human work: Enhancing already-completed human output through language improvement, tone adjustment, or formatting.
Pattern detection without replacement: Flagging anomalies or deviations for human review without replacing or influencing the original assessment.
Preparatory task: File indexing, translation, searching, and data linking that have no direct impact on substantive decisions.
Critical override: Systems performing profiling (automated processing evaluating personal aspects such as work performance, economic situation, health, preferences, behavior, or location) are always high-risk regardless of exemptions.
Claiming exemption requires documentation before market placement, registration in the EU database, and readiness to provide documentation to authorities on request.
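As a sketch of that decision logic, the profiling override has to be checked before any exemption condition. The field names and returned strings below are this article's shorthand, not statutory wording, and a real assessment still has to be documented and registered.

```python
from dataclasses import dataclass


@dataclass
class AnnexIIIAssessment:
    performs_profiling: bool
    narrow_procedural_task: bool = False
    improves_prior_human_work: bool = False
    pattern_detection_without_replacement: bool = False
    preparatory_task_only: bool = False


def article_6_3_outcome(a: AnnexIIIAssessment) -> str:
    # Profiling override: always high-risk, no exemption available.
    if a.performs_profiling:
        return "high-risk (profiling override)"
    # Any single condition suffices to claim the exemption, but the assessment
    # must still be documented before market placement and registered in the
    # EU database.
    if any((a.narrow_procedural_task,
            a.improves_prior_human_work,
            a.pattern_detection_without_replacement,
            a.preparatory_task_only)):
        return "exemption claimable (document and register)"
    return "high-risk"
```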
Practical Compliance Framework
For each AI system in your portfolio:
Check 1: Does Gate 1 prohibit it? Review Article 5 prohibited practices. If yes → discontinue or fundamentally redesign.
Check 2: Does Gate 2 classify it as high-risk?
- Is it a safety component in Annex I products requiring third-party conformity assessment?
- Does its intended use match an Annex III category?
- If yes to either → presumptively high-risk
- If Annex III: does Article 6(3) exemption apply AND no profiling involved?
- Document assessment either way
Check 3: Does Gate 3 require transparency?
- Does it interact directly with humans?
- Does it detect emotions or categorize biometrically?
- Does it generate synthetic content?
- If yes to any → transparency obligations apply (regardless of Gate 2 outcome)
Check 4: Does Gate 4 apply GPAI obligations?
- Are you the provider of a foundation model or GPAI system?
- If yes → GPAI documentation and transparency requirements
- If systemic risk (>10²⁵ FLOPs) → additional evaluation and incident reporting
Compile total obligations: Sum all triggered gates. A single system may require high-risk conformity assessment and transparency disclosures and, if you are the model provider, GPAI documentation.
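A portfolio-level sketch of the same workflow might look like the following. The systems and pre-answered gate flags are hypothetical; in practice each flag is the output of the documented checks above rather than a hard-coded boolean.

```python
# Hypothetical portfolio with pre-answered gate checks for each system.
portfolio = {
    "credit-scoring chatbot": {"gate1": False, "gate2": True,  "gate3": True,  "gate4": False},
    "customer service bot":   {"gate1": False, "gate2": False, "gate3": True,  "gate4": False},
    "spam filter":            {"gate1": False, "gate2": False, "gate3": False, "gate4": False},
}

# Obligation track attached to each gate when it triggers.
OBLIGATIONS = {
    "gate2": "high-risk regime (conformity assessment, registration, post-market monitoring)",
    "gate3": "Article 50 transparency disclosures",
    "gate4": "GPAI model-level obligations",
}

for name, gates in portfolio.items():
    if gates["gate1"]:
        print(f"{name}: PROHIBITED - discontinue or redesign")
        continue
    triggered = [OBLIGATIONS[g] for g in ("gate2", "gate3", "gate4") if gates[g]]
    print(f"{name}: {triggered or ['no gate triggered (minimal risk)']}")
```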
Why This Matters
The pyramid model causes three compliance failures:
Missing stacked obligations: Organizations classify a system as "high-risk" and forget that transparency requirements also apply when it interacts with humans.
False comfort from "limited risk": Teams think a chatbot is "only limited risk" without recognizing that if the chatbot does credit pre-screening, the system is also high-risk.
Wrong mental model for GPAI: The pyramid has no place for GPAI obligations, which run on a completely separate track from use-case risk classification.
The gates model matches the actual legislative structure and reflects how the law works in practice. Compliance planning should start here.
Gates vs Pyramid: What the Wrong Model Misses
| Scenario | Pyramid Model Says | Gates Model Says | What You Miss |
|---|---|---|---|
| Credit-scoring chatbot | Pick one: "high risk" or "limited risk" | High-risk (Gate 2) AND transparency (Gate 3) | Transparency obligations |
| HR screening with LLM backend | "High risk" | High-risk (Gate 2) AND transparency (Gate 3) AND GPAI applies to model provider (Gate 4) | GPAI provider obligations |
| Customer service bot | "Limited risk" | Transparency only (Gate 3) | Nothing, but wrong classification rationale |
| Emotion recognition at work | "High risk" | Prohibited (Gate 1) | You cannot deploy this system at all |
| Medical device AI | "High risk" | High-risk via Annex I (Gate 2) with different timeline than Annex III | August 2027 deadline, not August 2026 |
How to Classify an AI System Under the EU AI Act
For organizations asking "is my AI system high-risk under Annex III?" the answer requires running through all four gates sequentially.
Step 1: Screen for prohibited practices (Gate 1) Review Article 5. Social scoring, workplace emotion recognition, and real-time remote biometric identification in public spaces are banned outright (the biometric identification ban carries narrow law enforcement exceptions). If your system falls here, no compliance pathway exists.
Step 2: Check Annex III high-risk use cases (Gate 2) Does the intended use match one of the eight Annex III domains? Biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, or justice and democratic processes. If yes, the system is presumptively high-risk unless Article 6(3) exemptions apply and no profiling is involved.
Step 3: Check Annex I safety components (Gate 2, alternate pathway) Is the AI a safety component in products covered by EU harmonization legislation requiring third-party conformity assessment? Medical devices, machinery, toys, vehicles, aircraft. Different timeline applies (August 2027 for most).
Step 4: Assess transparency requirements (Gate 3) Does the system interact directly with humans, detect emotions, categorize biometrically, or generate synthetic content? Transparency obligations apply regardless of high-risk status.
Step 5: Determine GPAI applicability (Gate 4) Are you the provider of a foundation model or general-purpose AI system? Model-level documentation and transparency requirements apply. Systemic risk models (>10²⁵ FLOPs) face additional evaluation and incident reporting obligations.
Step 6: Compile total obligations Sum all triggered gates. Document the assessment. Register in the EU database if claiming Annex III exemption.
Modulos helps organizations navigate EU AI Act compliance across all four gates. Our AI governance platform provides systematic classification, documentation management, and obligation tracking for enterprises operating AI systems in Europe. For broader context on building an AI governance program, see our guide to AI governance. Request a demo to see how we map your AI portfolio against the actual regulatory structure.
Ready to Transform Your AI Governance?
Discover how Modulos can help your organization build compliant and trustworthy AI systems.