Trusted AI Governance
Avoid fines of up to €35M. Achieve conformity assessment. Deploy high-risk AI systems with regulatory confidence before the August 2027 enforcement deadline.
View Compliance Services

The EU AI Act mandates different compliance requirements based on AI risk level. Enterprise boards must understand which category their systems fall under.
Unacceptable risk: AI systems in this tier are banned entirely. No grace period, no conformity assessment; immediate prohibition.
High risk: AI systems that significantly impact health, safety, or fundamental rights. These require conformity assessment before market placement.
Limited risk: AI systems subject to transparency obligations. Users must be informed that they are interacting with AI.
Minimal risk: AI systems posing minimal risk. No mandatory obligations, but voluntary codes of conduct are encouraged.
Regulatory deadlines you cannot miss—plan backwards from August 2027
| Date | Requirement | Who's Affected | Status |
|---|---|---|---|
| Feb 2, 2025 | Prohibited AI systems banned | All organizations using prohibited AI | ACTIVE |
| Aug 2, 2025 | General-purpose AI model requirements | Providers of GPAI models (e.g., LLM providers) | ACTIVE |
| Aug 2, 2026 | Limited-risk AI transparency obligations | Chatbots, deepfakes, emotion recognition | 7 MONTHS |
| Aug 2, 2027 | High-risk AI conformity assessment | Credit scoring, HR, insurance, critical infrastructure | 19 MONTHS |
| Aug 2, 2030 | High-risk AI in existing products (grandfathering ends) | Legacy high-risk AI systems deployed pre-regulation | 55 MONTHS |
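The month counts in the Status column are only valid as of the date this page was written. As a minimal sketch (the deadline labels and helper function are our own, not from the regulation), the countdown can be recomputed for any reference date:

```python
# Sketch: recompute the "Status" countdown for any reference date.
# Deadline dates are from the table above; the helper is hypothetical
# and ignores day-of-month differences within the final month.
from datetime import date

DEADLINES = {
    "Prohibited AI banned": date(2025, 2, 2),
    "GPAI model requirements": date(2025, 8, 2),
    "Limited-risk transparency": date(2026, 8, 2),
    "High-risk conformity assessment": date(2027, 8, 2),
    "Legacy grandfathering ends": date(2030, 8, 2),
}

def months_remaining(deadline: date, today: date) -> str:
    """Whole months until the deadline, or ACTIVE if already in force."""
    months = (deadline.year - today.year) * 12 + (deadline.month - today.month)
    return "ACTIVE" if months <= 0 else f"{months} MONTHS"

for name, d in DEADLINES.items():
    print(f"{name}: {months_remaining(d, date(2026, 1, 2))}")
```

Run against January 2026, this reproduces the Status column shown above (ACTIVE, ACTIVE, 7 MONTHS, 19 MONTHS, 55 MONTHS).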
End-to-end compliance from risk classification through conformity assessment and post-market monitoring
What's required to comply with Article 43 before August 2027
1. Technical documentation: comprehensive technical file demonstrating compliance with all requirements (Articles 9-15, Annex IV).
2. Quality management system: documented QMS ensuring consistent compliance throughout the AI lifecycle (Article 17).
3. Conformity assessment: internal control or third-party notified body assessment, depending on Annex VI/VII classification.
4. CE marking and declaration: affix the CE marking and draw up the EU declaration of conformity once the assessment is passed.
5. Registration: register the high-risk AI system in the EU database before market placement (Article 71).
6. Post-market monitoring: ongoing monitoring, serious incident reporting, and technical documentation updates (Articles 72 and 73).
Does the EU AI Act apply to my organization?
Yes, if: (1) your AI systems are placed on the EU market, (2) your AI outputs are used in the EU, or (3) you are an EU-based user of AI systems. The EU AI Act has extraterritorial reach similar to GDPR. If you serve EU customers, have EU operations, or your AI affects EU persons, you are likely in scope, regardless of where your headquarters are located.
What counts as high-risk AI?
High-risk AI is defined in Annex III of the regulation. Key categories include: biometric identification, critical infrastructure, education/employment, law enforcement, migration/border control, administration of justice, and democratic processes. Additionally, AI used as a safety component in products regulated under existing EU legislation (medical devices, vehicles, machinery) is automatically high-risk. Our Compliance Readiness assessment (£12K) provides definitive risk classification with legal justification.
How does the EU AI Act relate to ISO 42001?
The EU AI Act is mandatory legal compliance for high-risk AI, with specific conformity assessment requirements enforced by national authorities. ISO 42001 is a voluntary international standard for AI management systems. While ISO 42001 can help build governance foundations that support EU AI Act compliance, it does not substitute for conformity assessment. Many organizations pursue both: ISO 42001 for the governance framework, EU AI Act compliance for the legal obligation.
What are the penalties for non-compliance?
Fines are tiered by violation severity: (1) €35M or 7% of global annual turnover, whichever is higher, for prohibited AI violations; (2) €15M or 3% for high-risk AI non-compliance (failure to meet conformity requirements); (3) €7.5M or 1.5% for other violations, including transparency failures. Enforcement authorities can also impose injunctions, product withdrawals, market bans, and temporary prohibitions. These are administrative fines; civil liability, private damages, and reputational harm are additional risks beyond regulatory penalties.
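The "fixed amount or percentage of turnover, whichever is higher" logic can be illustrated with a short sketch. The tier names and helper function below are hypothetical; the figures are the fine tiers stated above:

```python
# Sketch: tiered EU AI Act administrative fines.
# Each tier is (fixed cap in EUR, fraction of worldwide annual turnover);
# the applicable maximum fine is the HIGHER of the two.
TIERS = {
    "prohibited_ai": (35_000_000, 0.07),            # €35M or 7%
    "high_risk_noncompliance": (15_000_000, 0.03),  # €15M or 3%
    "other_violations": (7_500_000, 0.015),         # €7.5M or 1.5%
}

def max_fine(violation: str, annual_turnover_eur: float) -> float:
    """Maximum administrative fine for a violation tier and turnover."""
    fixed_cap, pct = TIERS[violation]
    return max(fixed_cap, pct * annual_turnover_eur)

# Example: for a €2B-turnover firm, 7% (€140M) exceeds the €35M cap.
print(max_fine("prohibited_ai", 2_000_000_000))  # 140000000.0
```

For large enterprises the percentage term dominates, which is why exposure scales with group turnover rather than the headline €35M figure.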
Can the deadlines be extended?
No. The EU AI Act deadlines are regulatory requirements with no extension mechanism or grace period. Organizations that fail to comply by the deadlines face immediate enforcement action. There is, however, a grandfathering provision: high-risk AI systems placed on the market before August 2, 2027 have until August 2, 2030 to achieve compliance, but only if they were already commercially deployed pre-regulation. This is not an extension but a transition provision for legacy systems.
Do we need a third-party notified body?
It depends on your AI system's classification under Annexes VI and VII. Some high-risk AI (e.g., biometrics per Annex III point 1, critical infrastructure management) requires mandatory third-party notified body assessment. Others allow internal conformity assessment if you have robust quality management systems. Annex VI sets out the conformity assessment procedure based on internal control; Annex VII covers the notified-body procedure based on assessment of the quality management system and technical documentation. Our Compliance Readiness assessment identifies which conformity pathway applies to your specific AI systems.
Start your EU AI Act compliance journey today. High-risk conformity assessment typically takes 4-6 months; the closer you get to August 2027, the more rushed and expensive implementation becomes.
Request Compliance Roadmap

Integrate with ISO 42001 certification and our complete AI Governance services for comprehensive regulatory readiness.
Book a 30-minute consultation to discuss your EU AI Act compliance strategy, high-risk AI classification, and conformity assessment pathway.
For EU AI Act compliance inquiries and detailed discussions about conformity assessment and regulatory requirements.
Trusted AI Governance Ltd
London, United Kingdom
Company No: 15696417
We respond to EU AI Act compliance inquiries within 1 business day. Compliance projects typically start within 2 weeks of agreement.
Fill out the form below and we'll get back to you shortly