In a landmark move, the European Union (EU) has introduced the EU Artificial Intelligence Act. This new legislation aims to position the EU as a global leader in human-centric, trustworthy artificial intelligence (AI) development and deployment.
The AI Act takes a risk-based approach, categorising AI systems into four levels: unacceptable, high, limited, and minimal risk. Systems deemed to pose unacceptable risks, such as social scoring or manipulative AI, are banned outright. High-risk AI systems, including those used in critical areas such as education, employment, and law enforcement, face stringent compliance requirements.
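As an illustrative sketch only (the tier names follow the Act's four categories, but the one-line summaries are simplified paraphrases, not legal text), a compliance checklist tool might model the tiers like this:

```python
from enum import Enum

class RiskTier(Enum):
    # Simplified paraphrases of each tier's treatment under the Act.
    UNACCEPTABLE = "prohibited outright (e.g. social scoring, manipulative AI)"
    HIGH = "stringent compliance requirements before and after deployment"
    LIMITED = "transparency obligations (e.g. disclosing that AI is in use)"
    MINIMAL = "no additional obligations under the Act"

def summarise(tier: RiskTier) -> str:
    """Return a plain-English summary of how the Act treats this tier."""
    return f"{tier.name.title()} risk: {tier.value}"

print(summarise(RiskTier.HIGH))
```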
One of the key features of the legislation is its extraterritorial reach. It applies not only to EU-based entities but also to providers and deployers outside the EU whose AI systems’ outputs are intended for use within the EU. This broad scope underscores the EU’s commitment to setting global standards for AI governance.
The legislation introduces new roles and responsibilities, defining “providers,” “distributors,” “importers,” and “deployers” of AI systems. Each role carries specific obligations, ensuring accountability throughout the AI supply chain.
Enforcement of the Act will be a collaborative effort involving national authorities and an AI Office within the European Commission. Penalties for non-compliance are severe: for the most serious violations, fines can reach €35 million or 7% of global annual turnover, whichever is higher.
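To make the penalty arithmetic concrete, here is a minimal sketch (the function name and the example turnover figure are illustrative; the €35 million and 7% figures are the ceilings stated above):

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Ceiling for the most serious AI Act violations:
    the higher of EUR 35 million or 7% of global annual turnover."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# A firm with EUR 1 billion in turnover: 7% = EUR 70 million, which
# exceeds the EUR 35 million floor, so the higher figure applies.
print(f"EUR {max_fine_eur(1_000_000_000):,.0f}")  # EUR 70,000,000
```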
Alongside the Act, the proposed AI Liability Directive aims to ensure fair compensation for those harmed by AI systems, addressing the unique challenges posed by AI’s complexity and opacity.
Although the Act entered into force across all 27 Member States on 1 August 2024, the majority of its provisions will not apply until 2 August 2026.