AI Oversight: Global Regulatory Tracker - European Union
January 19, 2025
With the EU AI Act, the EU has launched a groundbreaking legal framework, striving to establish itself as a leading hub for human-centric, trustworthy AI. This article offers a brief overview of selected areas of the Act. It is intended to provide general information for interested readers; it is not comprehensive and should not be taken as legal advice. Please refer to the links below for the recent laws and regulations governing AI (the "AI Regulations").
The main legislative framework for AI regulation in the EU is the EU AI Act (here). Additionally, the EU has proposed the AI Liability Directive (here), aimed at ensuring that liability rules are effectively applied to claims related to AI.
Status of AI Regulations
The EU AI Act, recognized as the first comprehensive regulatory framework for artificial intelligence within the European Union, was officially published in the EU Official Journal on July 12, 2024. The legislation entered into force on August 1, 2024, but most of its provisions will not apply until August 2, 2026, subject to the exceptions specified in Article 113.
Currently, the AI Liability Directive remains in draft form and has yet to be adopted by the European Parliament and the Council of the EU, leaving its timeline uncertain. Additionally, on September 5, 2024, the Council of Europe's Framework Convention on AI was signed by several parties, including Andorra, Georgia, Iceland, Norway, the Republic of Moldova, San Marino, the United Kingdom, Israel, the United States, and the European Union. The treaty will enter into force once five signatories, including at least three Council of Europe member states, have ratified it, and it will be open to other countries worldwide that wish to abide by its principles.
A variety of existing laws in the EU may impact AI development and usage, including:
• The EU General Data Protection Regulation (EU) 2016/679
• The forthcoming Product Liability Directive, which, if passed, will enable individuals harmed by AI software to seek compensation from manufacturers
• The General Product Safety Regulation 2023/988/EU, which updates previous directives
• Various intellectual property laws specific to each EU Member State
Definition of AI
The EU AI Act defines "AI" with several key terms:
• An "AI system" is a machine-based system designed to operate with different levels of autonomy and may adapt after deployment, generating outputs such as predictions or recommendations based on its received inputs.
• A "general-purpose AI model" refers to an AI model capable of performing diverse tasks, provided it is not solely used for research or development prior to market entry.
• A "general-purpose AI system" is built on a general-purpose AI model and can serve multiple functions, either directly or integrated into other AI systems.
The AI Liability Directive is expected to adopt similar definitions as the EU AI Act.
Territorial and Sectoral Scope
The EU AI Act has extraterritorial applicability, affecting:
• Providers offering AI systems or general-purpose models in the EU market, regardless of their location.
• Deployers of AI systems established within the EU.
• Providers or deployers located outside the EU, where the output produced by their AI systems is used within the EU.
The AI Liability Directive, by contrast, applies to non-contractual civil law claims for damages caused by AI systems brought within the EU.
Both the EU AI Act and the AI Liability Directive are broad in scope, as they apply to all sectors without limitation.
Roles and Compliance Under the EU AI Act
• Entities, including public authorities, that develop AI systems or general-purpose AI models (or have them developed) and place them on the market or put them into service under their own name or trademark are "providers."
• Entities in the supply chain, other than providers or importers, that make AI systems available on the EU market are "distributors."
• Entities located in the EU that place on the market AI systems bearing the name or trademark of a person established outside the EU are "importers."
• Entities that use AI systems under their authority, except for personal, non-professional activity, are "deployers."
• The term "operators" is the umbrella category covering providers, product manufacturers, deployers, authorised representatives, importers, and distributors.
• Each role entails specific compliance responsibilities, with the AI Liability Directive enhancing the likelihood of successful claims against AI system developers or users.
Core Objectives
The EU AI Act aims to foster trustworthy AI while ensuring high protection standards for health, safety, rights, and democracy against the risks posed by AI technologies. The AI Liability Directive seeks to afford individuals harmed by AI systems the same protections as those injured by other technologies, addressing the difficulties of proving liability due to the complex and autonomous nature of AI.
Risk Assessment and Compliance Requirements
The EU AI Act categorizes AI systems into four risk levels, each carrying different compliance requirements:
• Unacceptable Risk: These systems are prohibited entirely, including those employing social scoring or deceptive practices.
• High Risk: Subjected to rigorous compliance, these systems must be registered before market entry and adhere to extensive requirements regarding data training, documentation, human oversight, and security.
• Limited Risk: Systems that interact with individuals (such as chatbots) must follow transparency obligations.
• Low/Minimal Risk: Any AI not fitting the above categories faces no specific compliance requirements.
The Act also introduces codes of conduct intended for voluntary adoption by AI providers.
Regulatory Framework
The enforcement of the EU AI Act involves collaborative oversight by national authorities appointed by EU Member States, responsible for maintaining compliance and conducting assessments. A dedicated AI Office within the European Commission will support enforcement efforts, complemented by scientific experts and a Board representing Member States to ensure consistent application of the regulations.
Additionally, national courts will oversee the implementation of the AI Liability Directive related to civil law claims.
Enforcement and Penalties
Market surveillance authorities are empowered to act if AI systems do not comply with regulations or pose risks despite compliance. They can mandate corrective actions or restrict market access for non-compliant AI systems.
Penalties for violations can be substantial, with fines reaching up to €35 million or 7% of a company's total worldwide annual turnover, whichever is higher, for the most serious breaches, while lower tiers apply to lesser infringements such as supplying incorrect, incomplete, or misleading information to authorities.
The AI Liability Directive creates a rebuttable presumption of causation, simplifying the process for claimants in proving liability related to AI system damages, and grants courts powers to demand evidence disclosure for high-risk AI systems implicated in harm.