EU AI Act Regulatory Profile
This profile maps the requirements of Regulation (EU) 2024/1689 (the EU AI Act) to AEEF controls, enabling organizations deploying AI systems in or affecting the European Union to align their AEEF implementation with EU legal obligations. The EU AI Act entered into force on 1 August 2024 with phased enforcement through 2027.
Assessment date: February 22, 2026
Applicability
This profile applies when any of the following conditions exist:
- AI systems are placed on the market or put into service in the European Union
- AI system outputs affect natural persons located in the EU, regardless of provider location
- The organization is established in the EU or uses an EU-based authorized representative
- The AI system falls under General-Purpose AI (GPAI) model obligations
EU AI Act Risk Classification
| EU AI Act Category | Description | AEEF Risk Tier Mapping | Key Obligations |
|---|---|---|---|
| Unacceptable Risk (Prohibited) | Art. 5 prohibited practices | No AEEF equivalent — MUST NOT deploy | Pre-deployment screening required |
| High-Risk | Annex III listed systems | AEEF Tier 3 | Full compliance with Chapter III obligations |
| Limited Risk | Transparency obligations | AEEF Tier 2 | Art. 50 transparency and disclosure |
| Minimal Risk | Voluntary codes of practice | AEEF Tier 1 | Voluntary AEEF governance recommended |
EU AI Act Overlay Control Set
| Control ID | EU AI Act Article | Control Title | AEEF Mapping | Priority |
|---|---|---|---|---|
| EU-AI-01 | Art. 5 | Prohibited Practices Screening | PRD-STD-010 REQ-010-02 | Immediate |
| EU-AI-02 | Art. 6, Annex III | High-Risk Classification | PRD-STD-010 REQ-010-01 | Immediate |
| EU-AI-03 | Art. 9 | Risk Management System | Pillar 2 + PRD-STD-010 | High |
| EU-AI-04 | Art. 10 | Data Governance | PRD-STD-011, PRD-STD-014 | High |
| EU-AI-05 | Art. 11, Annex IV | Technical Documentation | PRD-STD-005, PRD-STD-011 | High |
| EU-AI-06 | Art. 12 | Record-Keeping and Logging | Pillar 2 Retention Policy | High |
| EU-AI-07 | Art. 13 | Transparency to Deployers | PRD-STD-010, PRD-STD-014 | High |
| EU-AI-08 | Art. 14 | Human Oversight | Pillar 1 Human-in-the-Loop | High |
| EU-AI-09 | Art. 15 | Accuracy, Robustness, Cybersecurity | PRD-STD-003, 004, 007 | High |
| EU-AI-10 | Art. 43 | Conformity Assessment | Pillar 5 Maturity + ISO 42001 | Medium |
| EU-AI-11 | Art. 49, 71 | EU Database Registration | New obligation | Medium |
| EU-AI-12 | Art. 72 | Post-Market Monitoring | Transformation Lifecycle | High |
| EU-AI-13 | Art. 73 | Serious Incident Reporting | Pillar 2 Incident Response | High |
| EU-AI-14 | Art. 53-56 | GPAI Model Obligations | PRD-STD-011, PRD-STD-008 | High |
| EU-AI-15 | Art. 50 | AI-Generated Content Transparency | Pillar 2 Code Provenance | Medium |
EU-AI-01: Prohibited Practices Screening (Art. 5)
Organizations MUST screen AI systems against the prohibited practices list before deployment. Systems that perform social scoring, exploit vulnerabilities of specific groups, use real-time remote biometric identification in public spaces (with limited exceptions), or engage in any other practice prohibited under Art. 5 MUST NOT be deployed.
AEEF Mapping: Extend PRD-STD-010 REQ-010-02 policy boundaries to include an explicit Art. 5 screening checklist.
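A pre-deployment screening checklist of the kind this control requires can be sketched as a simple gate. The category labels below paraphrase the Art. 5(1) prohibited-practice headings and are not legal definitions; the function and field names are illustrative, not part of AEEF.

```python
from dataclasses import dataclass

# Paraphrased Art. 5(1) prohibited practices; not exhaustive legal text.
ART5_PROHIBITED_PRACTICES = [
    "subliminal or manipulative techniques causing significant harm",
    "exploitation of vulnerabilities (age, disability, social situation)",
    "social scoring leading to detrimental treatment",
    "predictive policing based solely on profiling",
    "untargeted scraping of facial images",
    "emotion recognition in workplaces or schools",
    "biometric categorisation inferring sensitive attributes",
    "real-time remote biometric identification in public spaces",
]

@dataclass
class ScreeningResult:
    system_id: str
    flagged: list[str]

    @property
    def may_deploy(self) -> bool:
        # Any flagged practice blocks deployment outright.
        return not self.flagged

def screen(system_id: str, answers: dict[str, bool]) -> ScreeningResult:
    """answers maps each Art. 5 practice to True if the system performs it."""
    flagged = [p for p in ART5_PROHIBITED_PRACTICES if answers.get(p, False)]
    return ScreeningResult(system_id, flagged)
```

In practice the checklist answers would come from the product team's Art. 5 review, with the screening record retained as evidence.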
EU-AI-02: High-Risk Classification (Art. 6, Annex III)
Organizations MUST classify each AI system against Annex III high-risk categories. Classification MUST be documented with rationale, reviewing body, and date. Annex III categories include: biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, and administration of justice.
AEEF Mapping: Extend PRD-STD-010 REQ-010-01 risk tiering with EU AI Act classification determination.
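The documentation fields this control demands (classification, rationale, reviewing body, date) map naturally onto a structured record. A minimal sketch, with shorthand category keys standing in for the Annex III headings:

```python
from dataclasses import dataclass
from datetime import date

# Shorthand keys for the Annex III high-risk categories listed above.
ANNEX_III_CATEGORIES = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration", "justice",
}

@dataclass
class ClassificationRecord:
    system_id: str
    annex_iii_matches: set[str]
    rationale: str
    reviewing_body: str
    reviewed_on: date

    def __post_init__(self):
        unknown = self.annex_iii_matches - ANNEX_III_CATEGORIES
        if unknown:
            raise ValueError(f"Unknown Annex III categories: {unknown}")

    @property
    def high_risk(self) -> bool:
        # Any Annex III match triggers high-risk treatment (AEEF Tier 3).
        return bool(self.annex_iii_matches)
```

An empty match set documents a deliberate "not high-risk" determination, which is itself evidence worth retaining.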
EU-AI-03: Risk Management System (Art. 9)
High-risk AI systems MUST implement a continuous risk management system that identifies, analyzes, evaluates, and mitigates risks throughout the lifecycle. The system MUST be proportionate to the risks and regularly updated.
AEEF Mapping: Security Risk Framework combined with PRD-STD-010 safety controls.
EU-AI-04: Data Governance (Art. 10)
Training, validation, and testing datasets MUST meet quality criteria: relevance, sufficient representativeness, freedom from errors to the best extent possible, and completeness. Bias examination procedures MUST be documented. Data governance practices MUST address data collection, preparation, and annotation.
AEEF Mapping: PRD-STD-011 and PRD-STD-014 data governance controls, extended with Training Data Governance.
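Completeness and representativeness are the two Art. 10 criteria most amenable to automated gates. A minimal sketch, assuming records arrive as dicts; the threshold is an illustrative placeholder each organization would set for itself:

```python
# Illustrative dataset quality gate; min_group_share is a placeholder threshold.
def dataset_quality_report(records, required_fields, group_field,
                           min_group_share=0.05):
    """Per-criterion checks for completeness and group representativeness."""
    total = len(records)
    complete = [r for r in records
                if all(r.get(f) is not None for f in required_fields)]
    groups = {}
    for r in records:
        groups[r.get(group_field)] = groups.get(r.get(group_field), 0) + 1
    underrepresented = [g for g, n in groups.items()
                        if n / total < min_group_share]
    return {
        "completeness": len(complete) / total,
        "complete_ok": len(complete) == total,
        "underrepresented_groups": underrepresented,
        "representative_ok": not underrepresented,
    }
```

The report's failures would feed the documented bias-examination procedure rather than silently blocking the pipeline.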
EU-AI-05: Technical Documentation (Art. 11, Annex IV)
High-risk systems MUST maintain technical documentation covering: general description, design specifications, development process, monitoring and control measures, risk management measures, validation and testing results, and operational information.
AEEF Mapping: PRD-STD-005 documentation requirements combined with PRD-STD-011 model cards.
EU-AI-06: Record-Keeping and Logging (Art. 12)
High-risk AI systems MUST enable automatic logging of events relevant to identifying risks and facilitating post-market monitoring. Providers MUST retain logs for a period appropriate to the system's intended purpose and for at least six months (Art. 19), or longer where other applicable Union or national law requires.
AEEF Mapping: Retention & Audit Policy retention controls.
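A minimal event-logging sketch using the standard library: daily rotation with a backup count sized to a six-month retention floor. The file path, logger name, and event format are illustrative assumptions, not AEEF-mandated values.

```python
import logging
from logging.handlers import TimedRotatingFileHandler

RETENTION_DAYS = 183  # ~6 months, the Art. 19 minimum; many retain longer

# Rotate daily and keep RETENTION_DAYS of history; path is illustrative.
handler = TimedRotatingFileHandler(
    "ai_system_events.log", when="D", interval=1, backupCount=RETENTION_DAYS
)
handler.setFormatter(logging.Formatter(
    "%(asctime)s %(name)s %(levelname)s %(message)s"
))

audit_log = logging.getLogger("eu_ai.events")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(handler)

# Log the kinds of events Art. 12 targets: inputs, reference data, outcomes.
audit_log.info("inference system=sys-1 input_hash=abc123 outcome=approved")
```

In production the same events would more likely flow to a centralized, tamper-evident log store; the retention floor is the point, not the transport.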
EU-AI-07: Transparency and Information to Deployers (Art. 13)
High-risk systems MUST be designed for sufficient transparency for deployers to interpret outputs. Instructions for use MUST include: intended purpose, accuracy levels, known limitations, human oversight measures, and expected operational lifetime.
AEEF Mapping: PRD-STD-010 trust controls and PRD-STD-014 REQ-014-23 automated decision disclosure.
EU-AI-08: Human Oversight (Art. 14)
High-risk systems MUST be designed for effective human oversight including the ability to: understand system capabilities and limitations, monitor operation, interpret outputs, decide not to use or override outputs, and interrupt operation.
AEEF Mapping: Pillar 1 Human-in-the-Loop controls.
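The oversight abilities listed above (interpret, override, interrupt) can be sketched as a wrapper around any model call. All names here are hypothetical; a real reviewer interface would be interactive rather than a callback:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Decision:
    model_output: str
    final_output: Optional[str] = None
    overridden: bool = False
    interrupted: bool = False

def with_human_oversight(model: Callable[[str], str],
                         review: Callable[[str], tuple[str, str]]):
    """review returns ("accept" | "override" | "interrupt", detail)."""
    def run(prompt: str) -> Decision:
        output = model(prompt)
        action, detail = review(output)
        if action == "interrupt":
            # Reviewer halts operation; no output is released.
            return Decision(output, interrupted=True)
        if action == "override":
            # Reviewer substitutes their own decision for the model's.
            return Decision(output, final_output=detail, overridden=True)
        return Decision(output, final_output=output)
    return run
```

Recording both the model output and the reviewer action in the `Decision` also serves the Art. 12 logging obligation.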
EU-AI-09: Accuracy, Robustness, and Cybersecurity (Art. 15)
High-risk systems MUST achieve appropriate levels of accuracy, robustness, and cybersecurity. Resilience against errors, faults, and manipulation attempts MUST be addressed through technical redundancy and security measures.
AEEF Mapping: PRD-STD-003 testing, PRD-STD-004 security scanning, and PRD-STD-007 quality gates.
EU-AI-10: Conformity Assessment (Art. 43)
High-risk systems MUST undergo conformity assessment before being placed on the market. Internal control (self-assessment) is permitted for most Annex III systems. Third-party assessment by a notified body is required for biometric systems where harmonised standards or common specifications have not been fully applied.
AEEF Mapping: ISO 42001 Certification Readiness and Pillar 5 Maturity Assessment.
EU-AI-11: EU Database Registration (Art. 49, 71)
Providers and deployers of high-risk AI systems MUST register the system in the EU database before market placement. Registration MUST include system description, intended purpose, conformity status, and member states of deployment.
AEEF Mapping: No direct AEEF equivalent — new compliance obligation. Organizations MUST add EU database registration to their AI product launch checklist.
EU-AI-12: Post-Market Monitoring (Art. 72)
Providers MUST establish a post-market monitoring system proportionate to the nature and risks. The system MUST actively and systematically collect, document, and analyze data on performance throughout the AI system's lifetime.
AEEF Mapping: Production Monitoring & Drift lifecycle controls.
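The "actively and systematically collect and analyze" requirement can be illustrated with a rolling performance monitor that flags degradation against a declared baseline. Window size and tolerance are placeholder values an organization would tune:

```python
from collections import deque

class PerformanceMonitor:
    """Rolling-accuracy monitor; thresholds are illustrative placeholders."""

    def __init__(self, baseline_accuracy: float, window: int = 500,
                 tolerance: float = 0.05):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect

    def record(self, correct: bool) -> None:
        self.outcomes.append(1 if correct else 0)

    @property
    def rolling_accuracy(self) -> float:
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 1.0

    @property
    def degraded(self) -> bool:
        # Flag when rolling accuracy falls more than `tolerance` below baseline.
        return self.rolling_accuracy < self.baseline - self.tolerance
```

A `degraded` flag would trigger the documented review path, potentially up to the Art. 73 serious-incident process.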
EU-AI-13: Serious Incident Reporting (Art. 73)
Providers MUST report serious incidents to market surveillance authorities no later than 15 days after becoming aware of them, shortened to 10 days where the incident involves the death of a person and 2 days for a widespread infringement. Serious incidents include death, serious damage to health, property, or the environment, and infringements of fundamental rights protections.
AEEF Mapping: Incident Response procedures, extended with EU-specific reporting timelines and authority notification.
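A deadline helper makes the tiered timelines concrete. This simplifies Art. 73 to three cases (the article's general 15-day rule, the 10-day rule for deaths, and the 2-day rule for widespread infringements); the function name is hypothetical:

```python
from datetime import date, timedelta

def reporting_deadline(awareness: date, death: bool = False,
                       widespread: bool = False) -> date:
    """Latest permissible report date, counted from awareness of the incident."""
    if widespread:
        days = 2        # widespread infringement: 2 days
    elif death:
        days = 10       # incident involving a death: 10 days
    else:
        days = 15       # general rule: 15 days
    return awareness + timedelta(days=days)
```

An incident-response runbook would compute this at triage time and alert if the clock is close to expiry.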
EU-AI-14: General-Purpose AI Model Obligations (Art. 53-56)
GPAI model providers MUST: maintain technical documentation, provide information to downstream providers, comply with copyright law, and publish a training content summary. Systemic risk GPAI models face additional obligations including adversarial testing and incident reporting to the AI Office.
AEEF Mapping: PRD-STD-011 model documentation and PRD-STD-008 supply chain controls.
EU-AI-15: Transparency for AI-Generated Content (Art. 50)
AI systems generating synthetic content (text, audio, image, video) MUST ensure outputs are machine-readable as AI-generated. Deployers of emotion recognition or biometric categorization systems MUST inform exposed persons. AI-generated deepfakes MUST be labeled.
AEEF Mapping: Extend Code Provenance principles to AI product output marking.
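The machine-readable marking requirement can be illustrated with a toy provenance envelope. Real deployments would use a recognized standard such as C2PA content credentials or robust watermarking rather than this ad-hoc JSON wrapper; all field names are assumptions:

```python
import json

def mark_ai_generated(content: str, system_id: str, model_version: str) -> str:
    """Wrap generated content in a machine-readable provenance envelope."""
    envelope = {
        "content": content,
        "provenance": {
            "ai_generated": True,           # the Art. 50 disclosure flag
            "system_id": system_id,
            "model_version": model_version,
        },
    }
    return json.dumps(envelope)

def is_marked_ai_generated(payload: str) -> bool:
    """Detect the marker; unmarked or non-JSON payloads return False."""
    try:
        data = json.loads(payload)
    except json.JSONDecodeError:
        return False
    if not isinstance(data, dict):
        return False
    return bool(data.get("provenance", {}).get("ai_generated"))
```

For audio, image, and video outputs the marker would live in format-native metadata or a watermark, not a JSON wrapper; the detection-side check is the common pattern.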
High-Risk System Implementation Checklist
- Art. 5 prohibited practices screening completed
- High-risk classification determination documented (Art. 6)
- Risk management system established (Art. 9)
- Data governance requirements met (Art. 10)
- Technical documentation prepared (Art. 11, Annex IV)
- Automatic logging enabled (Art. 12)
- Transparency and instructions for use provided (Art. 13)
- Human oversight measures designed (Art. 14)
- Accuracy, robustness, cybersecurity validated (Art. 15)
- Conformity assessment completed (Art. 43)
- EU database registration submitted (Art. 49)
- Post-market monitoring system operational (Art. 72)
- Serious incident reporting process documented (Art. 73)
- AI-generated content marking implemented (Art. 50)
Enforcement Timeline
| Date | Milestone |
|---|---|
| 1 Aug 2024 | EU AI Act enters into force |
| 2 Feb 2025 | Prohibited practices (Art. 5) enforceable |
| 2 Aug 2025 | GPAI obligations enforceable; governance and penalty provisions apply |
| 2 Aug 2026 | Full enforcement for Annex III high-risk AI systems (penalties up to EUR 35M or 7% global turnover) |
| 2 Aug 2027 | Obligations for high-risk systems embedded in regulated products (Annex I) enforceable |
Non-compliance penalties under the EU AI Act:
- Prohibited practices: Up to EUR 35 million or 7% of global annual turnover
- High-risk system obligations: Up to EUR 15 million or 3% of global annual turnover
- Incorrect information to authorities: Up to EUR 7.5 million or 1% of global annual turnover
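Each cap is "whichever is higher" of the fixed amount and the turnover percentage (Art. 99; for SMEs the lower of the two applies instead). Maximum exposure per violation class, using the figures above, can be computed directly:

```python
# (fixed cap in EUR, share of global annual turnover) per violation class.
PENALTY_CAPS = {
    "prohibited_practices": (35_000_000, 0.07),
    "high_risk_obligations": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_fine(violation: str, global_annual_turnover: float) -> float:
    """Upper bound on the fine: the higher of fixed cap and turnover share.

    Note: for SMEs and start-ups the Act applies the lower of the two,
    which this sketch does not model.
    """
    fixed, pct = PENALTY_CAPS[violation]
    return max(fixed, pct * global_annual_turnover)
```

For a firm with EUR 1B global turnover, a prohibited-practice violation is capped at EUR 70M (7% exceeds the EUR 35M floor).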
Related AEEF Content
- Compliance & Regulatory
- AI Standards Crosswalk
- ISO 42001 Certification Readiness
- Security Risk Framework
- PRD-STD-010: AI Product Safety & Trust
- PRD-STD-011: Model & Data Governance
- PRD-STD-014: AI Product Privacy & Data Rights
- AI Product Lifecycle
External Sources
- EU AI Act official text: https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng
- European AI Office: https://digital-strategy.ec.europa.eu/en/policies/ai-office