Quick Answer: The EU AI Act (Regulation 2024/1689) is the world's first comprehensive legal framework regulating artificial intelligence based on risk levels. It categorizes AI systems into four tiers—prohibited, high-risk, limited-risk, and minimal-risk—with penalties reaching €35 million or 7% of global turnover for non-compliance.
The EU Artificial Intelligence Act is a risk-based regulatory framework that entered into force on August 1, 2024, establishing harmonized rules for AI development, deployment, and use across all 27 EU member states. The Act applies extraterritorially to any organization whose AI systems affect EU citizens or companies, regardless of where the provider is headquartered.
The regulation assigns AI systems to four risk categories with corresponding obligations. Unacceptable-risk AI practices (like social scoring and emotion recognition in workplaces) are banned. High-risk systems require conformity assessments, EU database registration, and ongoing monitoring. Limited-risk AI must meet transparency requirements. Minimal-risk systems face no specific obligations.
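To make the tiering concrete, here is a minimal Python sketch of the four categories and their headline obligations. The `RiskTier` enum and `OBLIGATIONS` mapping are our own illustrative names, not anything defined by the Act or an official library.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers the EU AI Act defines."""
    PROHIBITED = "unacceptable risk"  # banned outright (e.g. social scoring)
    HIGH = "high risk"                # conformity assessment, registration, monitoring
    LIMITED = "limited risk"          # transparency requirements only
    MINIMAL = "minimal risk"          # no specific obligations

# Illustrative mapping of each tier to the headline obligations named above.
OBLIGATIONS = {
    RiskTier.PROHIBITED: ["do not place on the EU market"],
    RiskTier.HIGH: ["conformity assessment", "EU database registration", "ongoing monitoring"],
    RiskTier.LIMITED: ["transparency disclosures to users"],
    RiskTier.MINIMAL: [],
}

for tier in RiskTier:
    duties = OBLIGATIONS[tier]
    print(f"{tier.value}: {', '.join(duties) if duties else 'no specific obligations'}")
```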
The Act covers providers (developers), deployers (users), importers, distributors, and authorized representatives. General-purpose AI (GPAI) models, such as GPT-4, the model behind ChatGPT, are subject to a separate set of requirements under the GPAI framework that took effect on August 2, 2025.
The rollout timeline demonstrates urgent compliance needs.
Currently enforced provisions (as of September 2025):
- Ban on prohibited AI practices (since February 2, 2025)
- AI literacy obligations for providers and deployers (since February 2, 2025)
- Requirements for general-purpose AI (GPAI) model providers (since August 2, 2025)
Organizations using AI for employment decisions, credit scoring, law enforcement, critical infrastructure, or education fall under Annex III and must be ready for high-risk obligations by August 2, 2026. The longer August 2, 2027 deadline applies only to high-risk AI that serves as a safety component in products covered by EU harmonisation legislation (Article 6(1)).
Organizations deploying high-risk AI must implement:
- Human oversight assigned to trained, competent staff
- Operation of the system in line with the provider's instructions for use
- Controls ensuring input data under their control is relevant to the system's intended purpose
- Ongoing monitoring, with suspension of use and reporting when serious risks emerge
- Retention of automatically generated logs
- Notification of workers and their representatives before using high-risk AI in the workplace
Any organization whose AI systems are placed on the EU market or whose outputs are used in the EU must comply, regardless of headquarters location. This includes providers (developers), deployers (users operating AI professionally), importers, distributors, and product manufacturers. Companies in North America, Asia, the UK, and Switzerland deploying AI used by EU citizens or companies must comply. Small and medium-sized enterprises receive proportional penalty considerations.
The Act's phased application dates (Article 113):
- Prohibited AI practices and AI literacy obligations: February 2, 2025 (already enforced)
- GPAI model requirements: August 2, 2025 (already enforced)
- Transparency obligations and most other provisions, including Annex III high-risk systems: August 2, 2026
- High-risk AI serving as safety components in regulated products (Article 6(1)): August 2, 2027
- High-risk AI systems already in use by public authorities: August 2, 2030 (Article 111)
Organizations should begin compliance preparation immediately, as high-risk conformity assessments require substantial lead time.
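For teams tracking these dates programmatically, a small sketch like the following can flag which provisions already apply. The `MILESTONES` table and `provisions_in_force` helper are hypothetical names of our own; the dates come from the timeline above.

```python
from datetime import date

# Application dates from the Act's phased rollout (see timeline above).
MILESTONES = {
    date(2025, 2, 2): "prohibited practices banned; AI literacy obligations apply",
    date(2025, 8, 2): "GPAI model requirements apply",
    date(2026, 8, 2): "general application, incl. Annex III high-risk and transparency rules",
    date(2027, 8, 2): "rules for high-risk safety components in regulated products (Art. 6(1))",
    date(2030, 8, 2): "legacy high-risk systems used by public authorities must comply",
}

def provisions_in_force(today: date) -> list[str]:
    """Return every milestone that has already taken effect as of `today`."""
    return [label for d, label in sorted(MILESTONES.items()) if d <= today]

print(provisions_in_force(date(2025, 9, 29)))
# -> the February 2025 and August 2025 milestones
```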
Maximum fines scale with the severity of the violation:
- Prohibited AI practices: €35 million or 7% of global annual turnover, whichever is higher
- Non-compliance with high-risk AI obligations: €15 million or 3% of global turnover
- GPAI model violations: €15 million or 3% of global turnover
- Supplying incorrect or misleading information to authorities: €7.5 million or 1% of global turnover
For SMEs and startups, the lower of the two amounts applies. National supervisory authorities determine final penalty amounts based on violation severity, company size, and prior infringements.
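The "whichever is higher" rule is simple arithmetic. This sketch (`max_fine` is our own hypothetical helper, not part of any official tool) shows why the percentage, not the fixed cap, dominates for large companies.

```python
def max_fine(cap_eur: float, turnover_pct: float, global_turnover_eur: float) -> float:
    """Upper bound of an AI Act fine: the fixed cap or the percentage of
    worldwide annual turnover, whichever is higher.
    (For SMEs and startups, the lower of the two amounts applies instead.)"""
    return max(cap_eur, turnover_pct * global_turnover_eur)

# Prohibited-practice violation at a company with €2 billion global turnover:
# 7% of turnover (€140M) exceeds the €35M cap, so the higher figure applies.
print(max_fine(35_000_000, 0.07, 2_000_000_000))  # 140000000.0
```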
Organizations using third-party AI systems remain responsible for compliance as deployers. If you use AI for employment decisions, credit scoring, or other high-risk purposes, you must ensure that the AI provider supplies the necessary documentation, that the system meets EU requirements, and that you implement proper human oversight. Third-party risk management (TPRM) programs must now include AI Act compliance verification for all AI vendors. Supplier Shield helps assess vendor AI Act compliance as part of comprehensive third-party risk management.
An AI system is high-risk if it: (1) serves as a safety component in a product covered by EU safety legislation (toys, aviation, cars, medical devices, lifts) that requires third-party conformity assessment, OR (2) falls into one of the eight areas listed in Annex III: biometric identification, critical infrastructure, education, employment, essential services, law enforcement, migration, or administration of justice. Annex III systems that only perform narrow preparatory tasks or detect patterns without replacing human decision-making may be exempt, but that exemption never applies to systems that profile natural persons; those always remain high-risk.
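The two-pronged test reduces to a small piece of decision logic. The sketch below (`is_high_risk` and its flags are our own simplification, not a legal tool) mirrors the rules described above, including the narrow-task exemption and its profiling carve-out.

```python
def is_high_risk(
    safety_component_needing_third_party_assessment: bool,
    in_annex_iii_area: bool,
    only_preparatory_or_pattern_detection: bool,
    profiles_natural_persons: bool,
) -> bool:
    """Simplified mirror of the Article 6 high-risk test described above.
    Real classification requires legal analysis; this only encodes the logic."""
    # Prong 1: safety component in a regulated product requiring
    # third-party conformity assessment -> always high-risk.
    if safety_component_needing_third_party_assessment:
        return True
    # Prong 2: one of the eight Annex III areas...
    if in_annex_iii_area:
        # ...where profiling natural persons always stays high-risk,
        if profiles_natural_persons:
            return True
        # ...and the narrow-task exemption may otherwise apply.
        return not only_preparatory_or_pattern_detection
    return False

# A CV-screening tool used in hiring (Annex III, employment) that profiles applicants:
print(is_high_risk(False, True, False, True))  # True
```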
The EU AI Act is the world's first comprehensive AI regulation, establishing mandatory requirements for AI systems based on risk levels. With prohibitions already enforced, GPAI requirements active since August 2025, and full compliance deadlines approaching (2026-2027), organizations must immediately assess their AI systems, classify risk levels, and implement required governance structures.
Non-compliance carries severe financial penalties—up to €35 million or 7% of global revenue. The Act's extraterritorial reach means companies worldwide using AI that affects EU citizens must comply, making it the de facto global AI standard.
Supplier Shield helps European organizations ensure AI vendors comply with the EU AI Act through automated assessments, continuous monitoring, and documentation management—critical for meeting third-party AI risk management obligations under both the AI Act and NIS2 regulations.
Last Updated: September 29, 2025