Part of our Compliance & Regulation series
EU AI Act Compliance: What Businesses Need to Know in 2026
The EU Artificial Intelligence Act (Regulation (EU) 2024/1689) entered into force on August 1, 2024 — the world's first comprehensive AI regulation, establishing a risk-based framework for AI systems deployed in or affecting the EU market. With obligations phasing in through 2027, 2026 is the critical year for high-risk AI system compliance planning: providers and deployers of high-risk AI must have conformity assessments, technical documentation, and governance frameworks in place before August 2, 2026.
The EU AI Act has extraterritorial reach comparable to GDPR: any AI system placed on the EU market or used in the EU — regardless of where the provider is based — falls within scope. For AI providers, developers, importers, distributors, and deployers serving EU markets or processing EU data, compliance is not optional.
Key Takeaways
- The EU AI Act establishes four risk tiers: Unacceptable Risk (prohibited), High Risk, Limited Risk, and Minimal Risk
- Prohibited AI systems must be withdrawn from the EU market by February 2, 2025
- High-risk AI systems face the most demanding obligations — conformity assessment, technical documentation, fundamental rights impact assessment, post-market monitoring
- General Purpose AI (GPAI) models (like GPT-4, Claude, Gemini) face specific obligations; systemic risk GPAI models face heightened requirements
- The European AI Office (established February 2024) oversees GPAI model obligations; national market surveillance authorities oversee high-risk AI
- Non-compliance fines: up to €35 million or 7% of global annual turnover for prohibited AI violations; up to €15 million or 3% for high-risk AI violations
- Notified bodies for third-party conformity assessment are being designated; many high-risk systems can self-assess
- Codes of practice and harmonised standards are still being developed — monitor the European AI Office and the European standardisation bodies (CEN and CENELEC)
EU AI Act Timeline and Phase-In
The AI Act's obligations are phased in over three years:
| Effective Date | Obligations Entering into Force |
|---|---|
| February 2, 2025 | Prohibited AI systems (Article 5) — must be withdrawn or modified; AI literacy obligations (Article 4) |
| August 2, 2025 | GPAI model obligations; governance framework (designation of national competent authorities, penalty rules) |
| August 2, 2026 | High-risk AI system obligations in Annex I and III; notified body requirements |
| August 2, 2027 | High-risk AI systems that are safety components of products already under EU harmonisation legislation; obligations for existing AI systems put into service before August 2, 2026 |
Risk Classification Framework
The AI Act classifies AI systems into four risk tiers:
Tier 1: Unacceptable Risk (Prohibited AI)
Article 5 absolutely prohibits the following AI systems in the EU from February 2, 2025:
- Subliminal manipulation: AI systems deploying subliminal techniques beyond awareness or deceptive techniques to materially distort behaviour in a manner that causes or is reasonably likely to cause significant harm
- Exploitation of vulnerabilities: AI that exploits vulnerabilities of specific groups (age, disability, social/economic situation) to materially distort behaviour
- Social scoring: AI systems — whether used by public or private actors — evaluating or classifying individuals or groups based on social behaviour or personal characteristics, where the resulting score leads to detrimental or unfavourable treatment that is unjustified or unrelated to the context in which the data was collected
- Real-time biometric identification in public spaces: Remote real-time biometric identification systems used in publicly accessible spaces for law enforcement purposes (narrow exceptions for terrorism, serious crime, missing children)
- Biometric categorisation based on protected attributes: AI that categorises individuals based on biometrics to deduce race, ethnicity, religion, political opinions, sexual orientation, trade union membership
- Emotion recognition in workplace and education: AI systems that infer emotions in the workplace or education (with narrow exceptions)
- Crime prediction based on profiling: Predictive policing AI based solely on profiling without individual assessment
- Untargeted facial recognition database scraping: AI that creates or expands facial recognition databases through untargeted scraping
Action required: If your AI system falls into any of these categories, immediate withdrawal from the EU market is required. If any features of your AI product approach these definitions, legal review is essential.
Tier 2: High-Risk AI
High-risk AI systems (Articles 6 and Annex III) face the most extensive compliance obligations. High-risk status applies to:
AI as safety component of regulated products (Annex I): AI systems that are safety components of products subject to EU harmonisation legislation (medical devices, machinery, aviation, automotive, toys, lifts, pressure equipment, personal protective equipment, radio equipment, in vitro diagnostics, marine equipment, cableways, agricultural and forestry vehicles, rail systems, recreational craft, explosives)
Standalone high-risk AI systems (Annex III — 8 categories):
- Biometric identification and categorisation of natural persons: Real-time and post remote biometric identification (systems used solely to verify that a person is who they claim to be are excluded); biometric categorisation based on sensitive attributes; emotion recognition
- Critical infrastructure: AI managing or operating critical digital infrastructure, road traffic, or utilities (water, gas, heat, electricity)
- Education and vocational training: AI determining access to education, allocating educational opportunities, assessing students
- Employment and workers management: Recruitment and selection (CV filtering, interview assessment), performance evaluation, promotion/termination decisions, task allocation in gig economy platforms
- Access to essential private services and public services: AI evaluating creditworthiness or establishing credit scores (except AI used to detect financial fraud), risk assessment and pricing in life and health insurance, eligibility for essential public assistance benefits
- Law enforcement: Polygraphs, evidence reliability assessment, crime risk profiling, facial recognition in recordings for criminal investigations
- Migration, asylum, border control: Polygraph-type assessment of migrants and asylum seekers, risk assessment of irregular migration or security risks, verification of travel documents, examining applications for asylum, visas, and residence permits
- Administration of justice and democratic processes: AI for interpreting facts and law in judicial decisions, influencing elections/voting behaviour
Tier 3: Limited Risk AI
AI systems with transparency obligations but no conformity assessment:
- Chatbots and AI interaction: Must disclose to users that they are interacting with an AI system (unless obvious from context)
- Emotion recognition and biometric categorisation: Disclose to individuals when they are subject to these systems
- Deepfakes: Label AI-generated content as artificially generated or manipulated (particularly important for election, news, educational content)
Tier 4: Minimal Risk AI
AI systems posing minimal risk (spam filters, AI-enabled video games, AI in manufacturing QC) face no specific AI Act obligations beyond general product law requirements. Encouraged to comply voluntarily with codes of conduct.
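The four tiers above lend themselves to a first-pass triage helper for an AI system inventory. A minimal sketch in Python — the keyword buckets and use-case names are illustrative assumptions, not an official taxonomy, and any real classification requires legal review against the Act's definitions:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited (Article 5)"
    HIGH = "high-risk (Annex I / Annex III)"
    LIMITED = "limited risk (transparency obligations)"
    MINIMAL = "minimal risk (voluntary codes of conduct)"

# Illustrative buckets only -- real classification turns on the system's
# intended purpose as defined in the Act, not on keywords.
PROHIBITED_USES = {"social_scoring", "subliminal_manipulation", "untargeted_face_scraping"}
HIGH_RISK_USES = {"cv_screening", "credit_scoring", "exam_grading", "border_control"}
TRANSPARENCY_USES = {"chatbot", "deepfake_generation"}

def classify(intended_use: str) -> RiskTier:
    """First-pass triage of an intended use into a risk tier (not legal advice)."""
    if intended_use in PROHIBITED_USES:
        return RiskTier.UNACCEPTABLE
    if intended_use in HIGH_RISK_USES:
        return RiskTier.HIGH
    if intended_use in TRANSPARENCY_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify("cv_screening"))  # RiskTier.HIGH
```

Anything not caught by a higher tier defaults to minimal risk, mirroring the Act's residual category — but a triage script like this only flags candidates for review; it cannot clear a system.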
High-Risk AI System Obligations
If your AI system is classified as high-risk, the following obligations apply (Chapter III, Sections 2 and 3):
1. Risk Management System (Article 9)
Establish and maintain a documented risk management system covering the entire lifecycle of the AI system:
- Identification and analysis of reasonably foreseeable risks to health, safety, and fundamental rights
- Estimation and evaluation of risks
- Evaluation against post-market monitoring data
- Risk management measures (residual risk acceptable, eliminated, or mitigated)
2. Data and Data Governance (Article 10)
Training, validation, and testing data must meet specific quality criteria:
- Relevant, sufficiently representative, and, to the best extent possible, free of errors and complete for the intended purpose
- Examine for possible biases and take appropriate mitigation measures
- Bias detection, especially regarding protected characteristics (race, gender, age, disability)
- Data provenance documentation
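A common starting point for the bias examination Article 10 calls for is a per-group selection-rate comparison. The sketch below uses the US "four-fifths rule" ratio purely as an illustrative warning heuristic — it is not a threshold the AI Act sets:

```python
from collections import defaultdict

def selection_rates(records):
    """Per-group positive-outcome rates from (group, selected) pairs."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, selected in records:
        totals[group] += 1
        positives[group] += int(selected)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of lowest to highest selection rate. A value below 0.8 is the
    'four-fifths rule' flag -- used here only as a trigger for deeper review,
    not as an AI Act compliance threshold."""
    return min(rates.values()) / max(rates.values())

# Hypothetical screening outcomes: (protected group, selected?)
sample = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
rates = selection_rates(sample)                 # A: 2/3, B: 1/3
print(round(disparate_impact_ratio(rates), 2))  # 0.5 -- flags for review
```

A ratio this low would prompt the "appropriate mitigation measures" Article 10 requires, such as rebalancing training data or adjusting decision thresholds.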
3. Technical Documentation (Article 11 and Annex IV)
Prepare comprehensive technical documentation before placing on the market:
- General description of the AI system
- Detailed description of elements and development process
- Monitoring, functioning, and control of the system
- Validation and testing procedures
- Risk management documentation
- Changes made through the lifecycle
- List of harmonised standards applied
- Copy of EU declaration of conformity
4. Record-keeping and Logging (Article 12)
High-risk AI systems must have automatic logging enabled throughout operation:
- Recording of events relevant to assessing compliance
- Operational logs enabling identification of risks and incidents
- Logs retained for a period appropriate to the AI system's intended purpose — at least six months under Article 19, unless other applicable EU or national law requires longer
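A minimal shape for Article 12-style automatic logging is an append-only stream of structured, timestamped event records. A sketch assuming JSON Lines output and illustrative field names — the Act does not prescribe a log format:

```python
import json
import uuid
from datetime import datetime, timezone

def log_event(sink, system_id: str, event_type: str, detail: dict) -> dict:
    """Append one structured, timestamped event record as a JSON line."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "event_type": event_type,  # e.g. "inference", "override", "anomaly"
        "detail": detail,
    }
    sink.write(json.dumps(record) + "\n")
    return record

# Hypothetical usage: log each inference of a high-risk screening system
with open("ai_audit.log", "a") as sink:
    log_event(sink, "cv-screener-v2", "inference",
              {"input_ref": "candidate-1042", "output": "shortlist",
               "model_version": "2.3.1"})
```

Append-only JSON Lines keeps records tamper-evident when paired with restricted write access, and each line can be replayed later to reconstruct an incident — the "identification of risks and incidents" the Article targets.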
5. Transparency and Information for Deployers (Article 13)
High-risk AI must be transparent enough for deployers to understand what it does. Providers must give deployers:
- Instructions for use (clear, complete, correct, comprehensible)
- Information about capabilities and limitations
- Performance metrics on specific groups
- Input data specifications
- Information to enable deployers to fulfil their fundamental rights impact assessment obligation
6. Human Oversight (Article 14)
High-risk AI must be designed and developed to enable human oversight:
- Ability to fully understand capabilities and limitations
- Ability to monitor operation and detect anomalies
- Ability to override, interrupt, or stop the system
- Ability to interpret outputs (especially biometric and employment AI)
- For fully automated decisions: oversight measures appropriate to risks
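One common way to implement the override requirement is a confidence-gated human-in-the-loop step: low-confidence outputs route to a reviewer, who can substitute any outcome. A sketch with hypothetical names and an assumed 0.9 threshold:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Decision:
    subject: str
    ai_outcome: str
    confidence: float
    final_outcome: Optional[str] = None
    reviewed_by: Optional[str] = None

def gate(decision: Decision, reviewer: Callable[[Decision], str],
         reviewer_id: str, threshold: float = 0.9) -> Decision:
    """Route low-confidence AI outcomes to a human who can override them."""
    if decision.confidence < threshold:
        decision.final_outcome = reviewer(decision)  # human decides
        decision.reviewed_by = reviewer_id
    else:
        decision.final_outcome = decision.ai_outcome  # auto-accepted, still logged
    return decision

d = gate(Decision("candidate-7", "reject", 0.62),
         reviewer=lambda dec: "escalate", reviewer_id="hr-reviewer-3")
print(d.final_outcome, d.reviewed_by)  # escalate hr-reviewer-3
```

The gate satisfies only part of Article 14 — overseers also need tooling to monitor, interpret, and stop the system, not just a checkpoint on individual outputs.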
7. Accuracy, Robustness, and Cybersecurity (Article 15)
High-risk AI must achieve appropriate levels of:
- Accuracy appropriate for intended purpose
- Robustness to errors, faults, inconsistencies, and adversarial attack
- Cybersecurity throughout lifecycle; adversarial robustness assessment
8. Quality Management System (Article 17)
Providers must implement a quality management system covering:
- Strategy for regulatory compliance
- Techniques and processes for AI system design
- System validation and testing procedures
- Technical documentation maintenance
- Post-market monitoring
- Accountability framework and senior management sign-off
9. EU Declaration of Conformity (Article 47)
Providers must draw up a written EU Declaration of Conformity and affix the CE marking before placing on the market.
10. Registration in EU Database (Article 49)
High-risk AI systems (Annex III) must be registered in the EU database before placement on the market. The database is set up and maintained by the European Commission (Article 71).
General Purpose AI (GPAI) Model Obligations
Chapter V (Articles 51–56) specifically addresses General Purpose AI models — large AI models trained on vast data that can perform a wide range of tasks (GPT-4, Claude, Gemini, Llama). Obligations apply to GPAI model providers, not deployers.
All GPAI Model Providers (Article 53)
- Prepare and maintain technical documentation for national authorities and the AI Office
- Provide information and documentation to downstream providers who integrate GPAI into their AI systems
- Put in place a policy to comply with EU copyright law, including honouring the text-and-data-mining opt-out under Directive (EU) 2019/790
- Publish a sufficiently detailed summary of the content used for training, following the AI Office template
Systemic Risk GPAI Models (Article 55)
GPAI models with systemic risk — presumed where cumulative training compute exceeds 10^25 FLOPs (Article 51) — face heightened obligations:
- Model evaluation, including adversarial testing (red-teaming) per state-of-the-art protocols
- Assessment and mitigation of possible systemic risks at Union level
- Tracking, documenting, and reporting serious incidents and corrective measures to the AI Office
- Adequate cybersecurity protection for the model and its physical infrastructure (weights, architecture, training data)
Providers must notify the Commission when a model meets the compute threshold, and the AI Office monitors compliance. Because the 10^25 FLOP threshold is fixed while training compute grows cheaper, it may capture more models over time; the Commission can adjust it by delegated act.
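As a rough sanity check against the Article 51 threshold, training compute for dense transformer models is often approximated as C ≈ 6 × parameters × training tokens. A minimal sketch using this common rule of thumb — not a methodology the Act prescribes — with hypothetical model sizes:

```python
SYSTEMIC_RISK_THRESHOLD = 1e25  # Article 51: cumulative training compute in FLOPs

def approx_training_flops(params: float, tokens: float) -> float:
    """Rough estimate via the C ~= 6*N*D rule of thumb for dense transformers
    (a community heuristic, not an AI Act-prescribed calculation)."""
    return 6 * params * tokens

# Hypothetical model: 70B parameters trained on 10T tokens
c = approx_training_flops(70e9, 10e12)
print(f"{c:.2e}", c > SYSTEMIC_RISK_THRESHOLD)  # 4.20e+24 False
```

A 70B-parameter, 10T-token run lands below the threshold by this estimate; scaling either axis up a few-fold crosses it, which is why borderline providers should track cumulative compute carefully.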
Conformity Assessment Process
The conformity assessment determines whether a high-risk AI system complies with the AI Act requirements before market placement.
Self-assessment (Internal control — Annex VI): For most Annex III high-risk AI systems (except biometric identification), providers can self-assess conformity. This involves the provider conducting and documenting the full assessment against each applicable requirement, signing the Declaration of Conformity, and maintaining technical documentation.
Third-party assessment (Notified body): Required for biometric identification systems (Annex III, point 1) and safety-component AI in Annex I products where the relevant harmonisation legislation requires third-party assessment.
Notified bodies: Designated by EU member states and published in the NANDO database. Notified body designation for AI Act is ongoing — organisations should engage notified bodies early given likely capacity constraints.
AI Governance Framework Requirements
Articles 26 and 27 establish obligations for organisations that deploy (not just provide) high-risk AI:
Deployer obligations:
- Assign a human reviewer with authority to override AI decisions in high-risk use cases
- Ensure staff operating high-risk AI are trained and competent
- Conduct Fundamental Rights Impact Assessments (FRIAs) for certain high-risk AI deployed by public bodies or in certain private sector contexts
- Monitor operation of AI systems; report incidents to providers
- Keep logs of operation for minimum retention periods
General AI literacy: Article 4 requires providers and deployers to ensure their staff have sufficient AI literacy — understanding of AI capabilities, limitations, and risks relevant to their role.
EU AI Act Compliance Checklist
- AI system inventory completed — all AI systems used, provided, or deployed assessed
- Risk classification determined for each AI system (prohibited, high-risk, limited risk, minimal risk)
- Prohibited AI systems (Article 5) withdrawn or modified by February 2, 2025
- GPAI model obligations assessed if providing LLMs or foundation models
- High-risk AI systems identified — conformity assessment approach determined (self or notified body)
- Technical documentation prepared for each high-risk AI system
- Risk management system documented
- Data governance procedures for training/validation/testing data documented
- Automatic logging implemented in high-risk AI systems
- Human oversight mechanisms implemented for high-risk AI
- EU Declaration of Conformity prepared and CE marking applied
- High-risk AI systems registered in EU database
- Quality management system for AI development established
- Instructions for use prepared for deployers
- Post-market monitoring plan established
- AI literacy programme for relevant staff implemented
- FRIA process established for applicable deployer contexts
Frequently Asked Questions
Does the EU AI Act apply to AI tools we use internally, not sold externally?
Yes, where those tools fall into the high-risk categories. The AI Act applies to both providers (who develop and place AI on the market) and deployers (who use AI systems in professional contexts). An organisation that deploys a third-party high-risk AI system (e.g., an AI-powered recruitment screening tool) has deployer obligations including human oversight, staff training, and fundamental rights impact assessments. The provider of that tool has provider obligations including conformity assessment and technical documentation.
We use OpenAI's API to build an AI feature — are we the provider or deployer?
You are likely a provider of a high-risk AI system if the overall AI system you deploy falls into a high-risk category (Annex III). OpenAI is a GPAI model provider with its own obligations under Chapter V. When you integrate a GPAI model into a specific AI application for a regulated use case (e.g., CV screening, credit assessment), you become the provider of that specific AI system and bear the Annex III high-risk obligations. OpenAI (as GPAI model provider) must give you technical documentation and information to enable your compliance.
What is a Fundamental Rights Impact Assessment (FRIA) and when is it required?
A FRIA is a documented assessment of how a high-risk AI system might affect fundamental rights — privacy, non-discrimination, freedom of expression, access to justice, and so on. Under Article 27, deployers that are public bodies, private entities providing public services, or deployers of credit-scoring or life and health insurance AI must conduct a FRIA before first use. The assessment covers: which rights might be affected, what risks arise, how those risks can be mitigated, and who is responsible. The results must be notified to the relevant market surveillance authority.
How does the EU AI Act interact with GDPR?
The two regulations are complementary. GDPR applies when AI systems process personal data. The AI Act applies to AI systems regardless of whether they process personal data — it covers the AI system design, deployment, and oversight. Both apply simultaneously when high-risk AI processes personal data: GDPR's requirements for lawful basis, data minimisation, and DPIA requirements apply to the data processing; AI Act's requirements for risk management, logging, human oversight, and conformity assessment apply to the AI system. Where the AI Act and GDPR have overlapping requirements (e.g., transparency, automated decision-making), compliance with both must be achieved.
What are the penalties for AI Act violations?
Fines are tiered by violation severity: (1) Prohibited AI violations (Article 5): up to €35 million or 7% of global annual turnover, whichever is higher; (2) Violations of other obligations, including high-risk AI requirements: up to €15 million or 3% of global annual turnover; (3) Providing incorrect, incomplete, or misleading information to authorities: up to €7.5 million or 1.5% of global annual turnover. For SMEs and startups, each fine is capped at the lower (rather than the higher) of the fixed amount and the turnover percentage. The Commission, acting through the AI Office, enforces GPAI model obligations; national market surveillance authorities enforce in their territories.
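The "whichever is higher" tiering reduces to a simple max (for SMEs, the Act applies the lower of the two instead). A quick illustration with hypothetical turnover figures:

```python
def max_fine(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """AI Act fine ceiling: the HIGHER of the fixed cap and the turnover
    percentage (for SMEs and startups, the Act takes the LOWER instead)."""
    return max(fixed_cap_eur, pct * turnover_eur)

turnover = 2_000_000_000  # hypothetical EUR 2bn global annual turnover
print(max_fine(turnover, 35e6, 0.07))  # 140000000.0 -- prohibited AI tier
print(max_fine(turnover, 15e6, 0.03))  # 60000000.0  -- tier 2 violations
```

For large groups the percentage almost always dominates; the fixed caps bite mainly for smaller turnovers.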
When do we need to register our AI system in the EU database?
High-risk AI systems covered under Annex III must be registered before being placed on the EU market or put into service. The EU database is set up and maintained by the European Commission (Article 71). The August 2, 2026 deadline is when registration obligations fully apply for standalone high-risk AI (Annex III). Registration requires: provider details, AI system name and version, intended purpose, categories of users, high-risk category, conformity assessment details, and Declaration of Conformity reference.
Next Steps
The EU AI Act represents a fundamental shift in how AI systems must be developed, assessed, and deployed in the EU market. For technology companies building AI products — whether internal tools or customer-facing applications — compliance requires integrating AI governance into your product development lifecycle from design through deployment and post-market monitoring.
ECOSIRE's OpenClaw AI platform services are built with EU AI Act compliance in mind. Our team helps businesses assess their AI systems under the Act's risk framework, implement required governance controls, and prepare for conformity assessment.
Explore AI compliance services: ECOSIRE OpenClaw Services
Disclaimer: This guide is for informational purposes only and does not constitute legal advice. EU AI Act implementation guidance, harmonised standards, and codes of practice are still being developed. Consult qualified EU legal counsel for advice specific to your AI systems.
Written by
ECOSIRE Team, Technical Writing
The ECOSIRE technical writing team covers Odoo ERP, Shopify eCommerce, AI agents, Power BI analytics, GoHighLevel automation, and enterprise software best practices. Our guides help businesses make informed technology decisions.